Claude Code's Memory Problem and Its DIY Fix
Anthropic's /dream feature fixes Claude Code's memory decay, but most users can't access it. Here's how the system works and how to fix it yourself.
Written by AI · Marcus Chen-Ramirez
March 25, 2026

Photo: Chase AI / YouTube
Anthropic recently shipped a feature called /dream for Claude Code that addresses something most users probably didn't realize was broken: the assistant's memory gradually becomes less useful the longer you use it. The catch is that /dream has only rolled out to a small percentage of users, tucked behind a feature flag. The interesting part? Because Claude Code is an npm package and the prompt is public on GitHub, you can replicate the functionality yourself.
Chase, an AI developer and educator, walked through both the problem and the workaround in a recent video. What emerges is a window into how AI coding assistants actually maintain state across sessions—and why that's harder than it sounds.
How Claude Code's Memory Actually Works
Claude Code uses what it calls "auto-memory"—a system that creates persistent knowledge about you and your projects without requiring manual input. Tell Claude Code once that you go to the gym on Tuesdays, and it writes a markdown file recording that fact. Next time you mention scheduling something, it can reference that preference.
These memory files live in a .claude folder, separate from your project directory. Each project gets its own memory folder containing individual markdown files—one about your testing preferences, another about your coding conventions, maybe one about your schedule. A master MEMORY.md file acts as an index, pointing to all the individual memory files.
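A plausible layout, based on the description above, might look like this (the file and folder names below are illustrative, not a documented spec):

```
~/.claude/
  projects/
    my-app/
      memory/
        MEMORY.md        # master index referencing the files below
        testing.md       # "always run tests before committing"
        conventions.md   # naming and formatting preferences
        schedule.md      # "gym on Tuesdays"
```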
"The master memory file references all of the other memory files inside of that folder," Chase explains. "Think of it kind of like how skills work, right? You know how skills are essentially documents loaded into Claude Code that say, here are all the skills and here are the descriptions."
The system also saves complete session transcripts as JSONL files—every message you send, every tool Claude calls. These accumulate silently in the background, creating a comprehensive record of your interactions.
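JSONL is just one JSON object per line, which is why transcripts can grow indefinitely: each message appends a line without rewriting earlier ones. A minimal sketch of parsing such a transcript (the field names here are illustrative, not Anthropic's actual schema):

```python
import json

# Two hypothetical transcript records: a user message and a tool call.
transcript = "\n".join([
    json.dumps({"role": "user", "content": "run the tests"}),
    json.dumps({"role": "assistant", "tool": "Bash", "input": "pytest -q"}),
])

# JSONL is parsed one line at a time; appending never touches old lines.
records = [json.loads(line) for line in transcript.splitlines()]
print(len(records))  # 2
```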
The architecture is clever. Too clever, maybe, because it creates problems that compound over time.
The Decay Problem
Imagine you tell Claude Code on Monday that you always use React for frontend work. On Wednesday, you decide React is overrated and you're switching to Vue. Claude Code dutifully creates a second memory file reflecting your new preference. Both files now exist. Both get referenced in the master index. Claude Code now has contradictory information about your framework preferences, and nothing in the system naturally resolves that conflict.
Or consider relative dates. You tell Claude that a feature is due "next Friday." It saves that phrase. But what does "next Friday" mean two weeks from now? Four weeks? The memory becomes less accurate simply by existing.
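The fix Dream applies is to anchor relative phrases to the calendar at write time. A minimal sketch of that normalization, assuming "next Friday" means the first Friday strictly after today (one reasonable reading; the phrase's ambiguity is exactly why it decays in storage):

```python
from datetime import date, timedelta

def resolve_next_friday(today: date) -> date:
    """Convert the relative phrase 'next Friday' into an absolute date."""
    days_ahead = (4 - today.weekday()) % 7  # Friday is weekday 4
    if days_ahead == 0:
        days_ahead = 7  # if today is Friday, roll to the following week
    return today + timedelta(days=days_ahead)

print(resolve_next_friday(date(2026, 3, 23)))  # 2026-03-27
```

Stored as "2026-03-27", the deadline stays accurate no matter when the memory file is next read.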
Then there's bloat. The master memory file loads at the start of every session, creating context overhead. Individual memory files can be verbose. Chase notes the master file can grow to 200 lines, though "just like with CLAUDE.md, less is more."
The result: Claude Code can actually become dumber the longer you use it, weighted down by stale data, contradictions, and redundant information. Your AI assistant develops something like digital dementia.
Enter Dream: Memory Consolidation for AI
Anthropic's /dream command runs a four-phase consolidation process that Chase compares to how your brain organizes memories during sleep. The analogy is apt—both involve reviewing recent activity and deciding what to keep, what to merge, what to discard.
First, Dream reads the existing master memory file to understand what's currently stored. Second, it reviews the last five sessions' transcripts to see how you're actually using Claude Code day-to-day. Third, it merges duplicate files, resolves contradictions, updates relative dates to absolute ones ("next Friday" becomes "March 27th"), and removes stale information. Fourth, it prunes the master index to keep it tight.
"It's going to merge these memory files so it's not bloated. It's going to fix dates. It's no longer next Friday. It's, you know, March 27th," Chase demonstrates.
In his test run, Dream identified seven issues: near duplicates, contradictions, stale data in multiple files, a relative date, a code convention that shouldn't be in memory, and verbose entries. It consolidated two files, updated four, pruned three, and left five unchanged.
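The four phases can be sketched in miniature. Everything below is a stand-in: the data structures are illustrative, and the "merge" step collapses only exact duplicates, whereas the real feature also resolves contradictions and rewrites dates.

```python
def dream(master_index: list[str], memories: dict[str, str],
          transcripts: list[str]) -> tuple[list[str], dict[str, str]]:
    """Toy version of the four-phase consolidation described above."""
    # Phase 1: read the master index to see what's currently stored.
    current = {name: memories[name] for name in master_index if name in memories}

    # Phase 2: review the last five sessions of actual usage
    # (unused in this minimal sketch, shown for phase completeness).
    recent = transcripts[-5:]

    # Phase 3: merge duplicates -- here, drop files whose content
    # exactly repeats an earlier file.
    merged: dict[str, str] = {}
    for name, text in current.items():
        if text not in merged.values():
            merged[name] = text

    # Phase 4: prune the master index down to the surviving files.
    return list(merged), merged
```

Two memory files that both say "use React" would collapse into one, and the index shrinks to match.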
The feature represents a rare case where an AI system acknowledges its own limitations and builds in self-maintenance. Most tools accumulate cruft indefinitely; Dream actively fights entropy.
The Workaround: Public Prompts as Democratization
Because the Dream prompt is public—users with early access shared it on GitHub—anyone can recreate the functionality as a custom skill. Chase provides instructions: copy the prompt from Piebald AI's GitHub repository, tell Claude Code to create a new skill called "dream" using that prompt, and you're done.
He also suggests adding flag options: /dream runs at the project level, /dream user runs at the user level (affecting all projects), and /dream all does both. This gives you granular control over which memories get consolidated.
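A recreated skill file might look something like this (the frontmatter fields and argument handling are assumptions about how such a skill would typically be structured, not a verified copy of the repository's version):

```markdown
---
name: dream
description: Consolidate memory files -- merge duplicates, fix relative dates, prune the master index
---

<paste the consolidation prompt copied from the public GitHub repo here>

Argument handling (as described in the video):
- no argument -> consolidate project-level memory only
- `user`      -> consolidate user-level memory across all projects
- `all`       -> consolidate both
```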
The workaround raises interesting questions about access and openness in AI development. Anthropic chose to slow-roll this feature, presumably to monitor its impact and iterate carefully. But the underlying technology isn't proprietary magic—it's a prompt, which is just text. Once that text is public, the feature flag becomes less a technical limitation than a suggestion.
This pattern—official features being reverse-engineered and distributed before full rollout—is becoming common in AI tools. It's partly a function of how these systems work (prompts are just text), partly a result of enthusiastic user communities, and partly a consequence of companies choosing to be relatively open about their architectures.
What This Reveals About AI Assistant Design
The memory-decay problem and the Dream solution illuminate a broader challenge in AI assistant design: maintaining accurate, useful state over extended periods is hard.
Traditional software doesn't have this problem because it doesn't interpret. Your calendar doesn't need to consolidate memories about your scheduling preferences—it just stores events. But an AI assistant that "knows" you prefer morning meetings needs some mechanism to update that knowledge when your preferences change, to reconcile conflicts when you contradict yourself, and to clear out information that's no longer relevant.
Dream represents one approach: periodic, sleep-like consolidation. But other architectures are possible. Memory could be time-stamped and weighted by recency. Contradictory information could trigger explicit clarification requests. The master index could use vector similarity to group related concepts rather than maintaining flat markdown files.
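The recency-weighting alternative is easy to sketch. This is not how Claude Code works today; it's a hypothetical design where newer facts simply outrank older, contradictory ones instead of being merged in a batch pass (half-life chosen arbitrarily):

```python
import math
from datetime import date

def recency_weight(written: date, today: date,
                   half_life_days: float = 30.0) -> float:
    """Exponential-decay weight for a memory entry: an entry loses
    half its influence every `half_life_days` days."""
    age_days = (today - written).days
    return math.exp(-math.log(2) * age_days / half_life_days)

# A 30-day-old "I use React" note carries half the weight of a
# fresh "I use Vue" note, so the newer preference wins.
w_old = recency_weight(date(2026, 2, 23), date(2026, 3, 25))
w_new = recency_weight(date(2026, 3, 25), date(2026, 3, 25))
```

Under this scheme contradictions resolve themselves gradually, at the cost of never actually deleting anything—which is why a periodic consolidation pass like Dream remains attractive.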
Chase describes Dream as "one of those on the margin plays. It's not going to change how you approach projects, but it's essentially pure value, right? There's no real downside to making sure the memory is tighter and up to date."
That's true, but it might undersell the significance. If AI coding assistants are going to function as persistent collaborators—genuinely learning your patterns and preferences over weeks and months—they need robust mechanisms for managing that accumulated knowledge. Dream is a first attempt at solving a problem that will only become more important as these tools mature.
The fact that most users didn't notice their assistant's memory was decaying suggests either the problem is subtle or expectations are low. Possibly both. But as AI assistants become more capable and more central to development workflows, the difference between an assistant with clean, accurate memory and one carrying contradictory baggage will matter more.
—Marcus Chen-Ramirez, Senior Technology Correspondent
Watch the Original Video
Claude Code's Hidden /dream Feature MASSIVELY Upgrades Memory
Chase AI
8m 54s

About This Source
Chase AI
Chase AI is a dynamic YouTube channel that has quickly attracted 31,100 subscribers since its inception in December 2025. The channel is dedicated to demystifying no-code AI solutions, making them accessible to both individuals and businesses, regardless of their technical expertise. With a cross-platform reach of over 250,000, Chase AI is a vital resource for those looking to integrate AI into daily operations and improve workflow efficiency.