
ContextCore: turning AI agent conversations into an MCP-queryable memory layer

DEV Community · by Axonn Echysttas · April 2, 2026 · 2 min read


Hello :). This OSS product is for you (or future-you) who reached the point of wanting to tap into the ton of knowledge you have in your AI chat histories. "Hey, Agent, we have a problem with SomeClass.function, remind me what we changed in the past few months".

https://reach2.ai/context-core/

https://github.com/Kyliathy/context-core.git

Product's tl;dr:

ContextCore is a local-first memory layer that ingests AI coding chats across multiple IDE assistants and machines, makes them searchable (keyword + optional semantic), and exposes them to assistants over MCP so future sessions don’t start from zero.
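To make the "exposes them to assistants over MCP" part concrete, here is a minimal sketch of the JSON-RPC message an assistant would send to an MCP server to query a memory tool. The tool name `search_history` and its arguments are hypothetical illustrations, not ContextCore's actual API (see the repo for the real tool definitions):

```python
import json

# Hypothetical MCP "tools/call" request an assistant would send (e.g. over
# the stdio transport) to query past chat history. Tool name and argument
# names are assumptions for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_history",  # hypothetical tool name
        "arguments": {
            "query": "SomeClass.function changes",
            "mode": "keyword",     # or "semantic" when embeddings are enabled
            "limit": 5,
        },
    },
}

print(json.dumps(request, indent=2))
```

The point is that the assistant never reads your chat archives directly; it asks the local server a question and gets back only the relevant snippets as context.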

IMPORTANT: I emphasize local-first: nothing is sent to any LLM except when you explicitly use the MCP server in the context of an LLM session. However, once you enable semantic vector search OR chat-content summarization, we DO use LLMs (although you can use local ones).
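The "keyword search stays fully local" half of that split can be pictured as plain full-text indexing with no model in the loop. A minimal sketch using SQLite FTS5 (the schema and field names here are my illustration, not ContextCore's actual storage layout):

```python
import sqlite3

# Toy local keyword index over ingested chat messages: no LLM, no network.
# Schema is illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE chats USING fts5(assistant, content)")
conn.executemany(
    "INSERT INTO chats VALUES (?, ?)",
    [
        ("cursor", "Refactored SomeClass.function to cache results"),
        ("copilot", "Fixed off-by-one bug in pagination loop"),
    ],
)

# Phrase query; FTS5 tokenizes "SomeClass.function" into adjacent terms.
rows = conn.execute(
    "SELECT assistant, content FROM chats WHERE chats MATCH ?",
    ('"SomeClass.function"',),
).fetchall()
print(rows)  # only the matching message comes back
```

Only when you opt into semantic search would an embedding model (local or remote) enter the picture; keyword lookup like the above never leaves your machine.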

ContextCore is not just “chat history storage.” It is a developer-grade memory layer that turns AI-assisted development from ephemeral to iterative—where prior debugging sessions, architectural decisions, refactors, and tool-call outcomes become reusable context rather than lost effort.

More in the README.md in the repo.

This is the first time I'm showing this in a public forum :). My hope is to get a little feedback, hopefully even traction, so that I can get some help expanding ContextCore's compatibility (adding parsers for IntelliJ or other IDEs, for example - which is quite easy now that the project has solid architecture docs and templates). The project has a roadmap in the README.
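For anyone curious what "adding a parser" might involve, here is a rough sketch of the kind of plugin interface such a system could expose. All names here (`ChatMessage`, `ChatParser`, the JSON-lines format) are hypothetical; the real contract is in ContextCore's architecture docs and templates:

```python
import json
from dataclasses import dataclass
from typing import Iterable, Protocol


@dataclass
class ChatMessage:
    role: str        # "user" or "assistant"
    content: str
    timestamp: str   # ISO 8601


class ChatParser(Protocol):
    """Hypothetical plugin contract: turn a raw IDE export into messages."""
    def parse(self, raw_export: str) -> Iterable[ChatMessage]: ...


class JsonLinesParser:
    """Example parser for a one-JSON-object-per-line export format."""
    def parse(self, raw_export: str) -> Iterable[ChatMessage]:
        for line in raw_export.splitlines():
            if line.strip():
                obj = json.loads(line)
                yield ChatMessage(obj["role"], obj["content"], obj["ts"])


msgs = list(JsonLinesParser().parse(
    '{"role": "user", "content": "hi", "ts": "2026-04-02T10:00:00Z"}'
))
```

A new IDE integration would then only need to implement `parse` for that assistant's log format; ingestion, indexing, and MCP exposure stay shared.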

The endgame for ContextCore is to become an engineer's reliable sidekick for digging into chat history and turning it into pure context gold at the MINIMUM number of tokens spent. The current search system is decent, but much more can be done.

And my endgame is twofold: 1) give something back after being a lurker for years and 2) get some help to polish the search system and other areas of the product, so that we create an awesome, vendor-independent, cross-agent memory layer.

Thank you for reading this! :)
