I Tested Every 'Memory' Solution for AI Coding Assistants - Here's What Actually Works
Every AI coding session starts from scratch. You open Claude Code or Codex, and it has no idea that your team uses JWT with 15-minute expiry, that you migrated from REST to GraphQL last month, or that the payments service is the one thing you never touch on Fridays. You re-explain the same architecture decisions, the same conventions, the same constraints. Every single time.
This is not a minor annoyance. It is compounding time loss. The first 5-10 minutes of every session is wasted on context that the assistant already learned yesterday. Over weeks, that adds up to hours. I went looking for solutions and tested everything I could find.
What I Tried
a) Obsidian Mind (574 stars on GitHub)
Obsidian Mind is an Obsidian vault template that gives Claude Code persistent memory. It works by loading vault context through CLAUDE.md on session start. You get slash commands like /standup and /dump to interact with your knowledge base.
It is genuinely elegant if you already live in Obsidian. The setup is minimal, the vault structure makes sense, and the community around it is active. For pure Claude Code workflows, it does the job well.
The limitation: it only works with Claude Code. If you switch to Codex for a background task or try Gemini CLI for a second opinion, your memory stays locked in the Obsidian vault. You also need Obsidian installed, which is a non-trivial dependency if your team does not already use it.
b) Claude Code's Built-in Memory
Claude Code writes MEMORY.md files automatically to ~/.claude/. Zero setup required. It happens in the background as you work, capturing things the model thinks are important.
The convenience factor is real. You do not configure anything, and it just starts remembering. For solo Claude Code users who never switch tools, this might be all you need.
The limitations are predictable: it only works inside Claude Code, the memories are unstructured free text, there is no search, and nothing carries over to other assistants. You also have no control over what gets saved or how it is organized.
c) Custom CLAUDE.md / instructions.md Files
The manual approach. You maintain a context file that each tool reads on startup. Claude Code reads CLAUDE.md, Codex reads AGENTS.md, Gemini CLI reads GEMINI.md. You write your architecture decisions, conventions, and constraints by hand.
This works everywhere because every tool supports some form of instruction file. You have complete control over the content. Nothing is hidden, nothing is auto-generated.
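As a concrete sketch, a hand-maintained CLAUDE.md might look like this (the contents are illustrative, built from the examples earlier in this post, not a prescribed format):

```markdown
# CLAUDE.md — project context for Claude Code

## Architecture
- API style: GraphQL (migrated from REST last month; do not add new REST endpoints)

## Auth
- JWT access tokens expire after 15 minutes; refresh tokens after 7 days

## Constraints
- Never touch the payments service on Fridays
```

You would then duplicate roughly the same content into AGENTS.md and GEMINI.md for the other tools.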
The cost is maintenance. You are now a documentation writer for your AI assistants. Files diverge between tools. You forget to update one. The Codex file says you use REST, the Claude file says GraphQL. Nobody catches it until something breaks.
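One way to blunt the divergence problem is to treat a single file as the source of truth and copy it to the names the other tools expect, say from a pre-commit hook or a Makefile target. A minimal sketch (the file names match those described above; the sync direction is my assumption):

```shell
#!/bin/sh
# Sync one source-of-truth instruction file to each tool's expected file name.
# Creates a sample CLAUDE.md if none exists, so the sketch runs standalone.
set -e
[ -f CLAUDE.md ] || printf '## Conventions\n- API style: GraphQL\n' > CLAUDE.md
for target in AGENTS.md GEMINI.md; do
  cp CLAUDE.md "$target"
done
echo "Synced CLAUDE.md -> AGENTS.md GEMINI.md"
```

This does not remove the maintenance burden, but it guarantees the files cannot disagree with each other.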
d) Delimit remember/recall
This is what I built. Delimit stores memories in ~/.delimit/memory/ as structured entries with auto-generated tags. The same memory pool is accessible from the CLI and from any AI assistant that supports MCP.
```shell
npx delimit-cli remember "JWT expiry is 15min, refresh tokens are 7 days"
npx delimit-cli recall jwt
```
The remember command saves a tagged entry. The recall command searches across all your memories with fuzzy matching. The MCP server exposes the same store through delimit_memory_store and delimit_memory_search, so Claude Code, Codex, Gemini CLI, and Cursor all read from and write to the same pool.
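If you want a feel for the model without installing anything, here is a rough shell approximation: one file per entry, case-insensitive search across all entries. (The real store in ~/.delimit/memory/ uses structured, tagged entries; this flat-file version is purely illustrative.)

```shell
#!/bin/sh
# Toy stand-in for delimit remember/recall: one plain-text file per memory.
MEM_DIR="${MEM_DIR:-./memory-demo}"
mkdir -p "$MEM_DIR"

remember() {                       # remember "some fact"
  printf '%s\n' "$1" > "$(mktemp "$MEM_DIR/entry-XXXXXX")"
}

recall() {                         # recall <term>: case-insensitive match
  grep -ih -- "$1" "$MEM_DIR"/entry-*
}

remember "JWT expiry is 15min, refresh tokens are 7 days"
recall jwt
```

The point of the real tool is that this same pool is not shell-only: the MCP tools give every assistant the same read/write access.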
Memory is actually just one feature. Delimit is an API governance tool (breaking change detection, policy enforcement, CI integration), and the memory system exists because cross-session context turned out to be a prerequisite for reliable governance. You get both.
The limitation: it is newer and less battle-tested than Obsidian Mind's vault approach. The community is smaller. If you want a rich, interlinked knowledge graph, Obsidian Mind's vault structure is more sophisticated.
Comparison
| Feature | Obsidian Mind | Built-in Memory | Manual Files | Delimit |
| --- | --- | --- | --- | --- |
| Claude Code | Yes | Yes | Yes | Yes |
| Codex | No | No | Yes | Yes |
| Gemini CLI | No | No | Yes | Yes |
| Cursor | No | No | Yes | Yes |
| Zero config | Yes | Yes | No | Yes |
| Structured search | No | No | No | Yes |
| Auto-tagging | No | No | No | Yes |
| API governance | No | No | No | Yes |
| Requires Obsidian | Yes | No | No | No |
When to Use What
If you are all-in on Claude Code and already use Obsidian for your notes, Obsidian Mind is the right choice. It fits naturally into an existing workflow and the vault-based approach gives you rich, interlinked context that a flat memory store cannot match. Do not switch away from something that works.
If you use multiple AI assistants, or if you want governance tooling alongside memory, Delimit is built for that. One memory pool, every tool reads it.

If you want maximum control and do not mind maintaining files by hand, the manual approach is honest work. It scales poorly, but it never surprises you.
Try It
```shell
npx delimit-cli remember "your first memory"
npx delimit-cli recall
```
No install required. Memories persist in ~/.delimit/memory/ and are available to any MCP-compatible assistant.
- GitHub
- npm
- Docs