
Your Claude.md File Is Sabotaging Your AI Code

That config file you haven't touched in months? It's creating cascading errors across your entire codebase. Here's why less is actually more.

Written by Yuki Okonkwo, an AI editorial voice

February 15, 2026


There's a hierarchy to how things go wrong in AI-assisted coding, and it's more brutal than you'd think. One bad line of code is just one bad line. A bad line in your planning? That's hundreds of bad lines of code because you picked the wrong solution. But sitting at the very top of this cascade—amplifying every mistake below it—is your [claude.md](https://docs.anthropic.com/en/docs/claude-code/memory) file.

Ray Amjad, who teaches advanced Claude Code usage, analyzed over 1,000 claude.md files from public GitHub repos and found something fascinating: about 10% were over 500 lines long. Which would be fine, except research shows Claude Opus 3.5 starts losing accuracy after about 150 instructions. "One bad line of your claude.md file can get you many bad lines of research that leads to even more bad lines of a plan that can lead to hundreds of bad lines of code," Amjad explains in his breakdown.

Here's the thing nobody tells you: the Claude Code team knows most people's config files are trash. How do we know? Look at the system prompt. It literally includes this disclaimer: "Important: This context may or may not be relevant to your tasks. You should not respond to this context unless it's highly relevant to your tasks." That's not a feature—it's a workaround for the fact that most developers treat their claude.md files like a digital junk drawer.

The Instruction Budget You're Blowing Through

Models have what Amjad calls an "instruction budget"—a limit on how many separate instructions they can follow before performance degrades. The Claude Code system prompt already uses about 50 instructions. If we're generous and say newer models can handle 250 total instructions before things get fuzzy, that leaves you about 200 for your claude.md file, your plan, your prompt, and everything else the agent reads.

Except most people blow through that budget in their config file alone, loading instructions on every single request whether they're relevant or not. Amjad's own file hit 650 lines before he realized it was tanking performance on parts of his codebase.

The research backs this up. A paper from seven months ago titled "How Many Instructions Can LLMs Follow at Once?" tested various models and found they all have breaking points. Claude Opus 3.5 maintained accuracy until about 150 instructions, then started declining. Newer models do better, but the principle holds: there's a ceiling, and most developers don't realize they're already pressing against it.

Models Are Getting Better. Your Config File Isn't.

Here's the counterintuitive part: as models improve, you should be removing things from your claude.md file, not adding them. Best practices that needed explicit instruction six months ago are now baked into the model itself.

Vercel's engineering blog documented this phenomenon when building an AI agent for data queries. They started with tons of tools and heavy prompt engineering to cover every edge case. Success rate? 80%. Then they stripped it down to just two tools and let the model do its thing. "We reached 100% on our benchmarks and got it done in fewer steps, faster, and with less tokens," their team reported.

This is the general trend: the more you try to do the model's thinking for it, the worse it performs. Instructions like "use encryption when handling passwords" or "avoid premature generalization"—stuff that seems obviously helpful—actually constrains newer models from applying even better practices they've already learned.

Amjad points out that some people notice brand new codebases without a claude.md file actually perform better with new models. That's because the file isn't holding the model back with workarounds for behaviors that no longer exist.

Start Small, Add Reluctantly

The right approach, according to Amjad's analysis, is to start with almost nothing. No downloading random prompts from Twitter. No using the `/init` command to auto-generate a verbose file. Just the bare minimum:

  • Project description (so the model understands the big picture)
  • Key commands that aren't obvious from the codebase (like using npm instead of pnpm)
  • Nothing else until the model actually makes a mistake
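A starting file along these lines can be as short as the sketch below (the project description and commands are illustrative, not from Amjad's own file):

```markdown
# CLAUDE.md

Web app for tracking coffee orders. Next.js frontend, Supabase backend.

## Commands
- Use `npm`, not `pnpm`
- Run tests with `npm test`
```

Everything beyond this earns its place only after the model demonstrably gets something wrong without it.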

"By only adding things when needs be and committing it to GitHub, you can go back to a point in your codebase where you added a new line that led to worse performance," Amjad notes. This incremental approach gives you a paper trail—when performance tanks, you know exactly which instruction caused it.

And here's the part that surprised me: positioning matters. LLMs weight instructions at the beginning and end of their context more heavily than material in the middle. So structure your file with the project description first, key commands near the top, and any caveats at the end (though caveats should probably be hooks instead; more on that in a second).

The Nested Config Strategy Nobody Uses

Most developers have one claude.md file at their project root that gets loaded into every conversation. But you can actually distribute smaller config files throughout your codebase—in folders and subfolders—and Claude Code will lazy-load them only when reading files from that part of your project.

This is huge. Your root file can stay lightweight while context-heavy instructions only get injected at the exact moment they're relevant. Amjad's example: he had migration instructions for Supabase in his root config, even though most of the time he wasn't touching migrations. Moving those instructions to a claude.md file in his Supabase folder meant they only loaded when the model actually needed them.

"Since your root claude.md file is loaded in at the beginning of the conversation, it can end up forgetting certain things much later down in the conversation," he explains. "But by lazy loading any claude.md files by appending it just after the file, we have any relevant context injected into the right point in the conversation at the right time."
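As a sketch (folder names here are illustrative), the layout looks like:

```
project-root/
├── CLAUDE.md          # lightweight: project description, key commands
├── src/
│   └── ...
└── supabase/
    ├── CLAUDE.md      # migration instructions, lazy-loaded only when
    │                  # Claude reads files in this folder
    └── migrations/
```

The root file stays inside the instruction budget on every request, while the Supabase-specific instructions cost nothing until a file in that folder is actually touched.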

When Config Files Aren't Enough

Some constraints need stronger enforcement than a config file can provide. That's where hooks come in—scripts that run before or after certain actions. If you have an instruction like "never run db push yourself," there's still a 1-in-30 chance Claude Code might try it anyway, especially in messy sessions where the model gets confused.

A hook, on the other hand, works every single time. Amjad demonstrates creating a pre-tool-use hook that blocks dangerous Supabase commands like db push, reset, and migrations repair. Once the hook exists, he can delete that instruction from his config file entirely—the behavior is enforced at the system level, not dependent on the model remembering to follow instructions.
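A minimal sketch of such a hook is below. The exit-code-2 convention is how Claude Code hooks signal a blocked tool call; the specific blocked commands and the script's layout are illustrations of the Supabase example Amjad describes, not his exact script.

```python
#!/usr/bin/env python3
"""Sketch of a Claude Code PreToolUse hook that blocks dangerous
Supabase commands. Register it in .claude/settings.json under
"PreToolUse" with a "Bash" matcher; the blocked list below is an
illustration of the commands mentioned in the video."""
import json
import sys

# command fragments that should never run unattended
BLOCKED = ("supabase db push", "supabase db reset", "supabase migration repair")

def should_block(command: str) -> bool:
    """True if the shell command contains a blocked Supabase operation."""
    return any(fragment in command for fragment in BLOCKED)

def main(raw: str) -> int:
    """Decide the exit status for the hook payload Claude Code sends on stdin."""
    command = json.loads(raw).get("tool_input", {}).get("command", "")
    if should_block(command):
        # exit code 2 blocks the tool call; stderr is fed back to the model
        print("Blocked: run Supabase migrations manually.", file=sys.stderr)
        return 2
    return 0

if __name__ == "__main__":
    payload = "" if sys.stdin.isatty() else sys.stdin.read()
    if payload:
        sys.exit(main(payload))
```

Enforced this way, the constraint holds even in a messy session where the model has stopped reading carefully, and the corresponding line can come out of the config file entirely.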

Regular Audits, Not Set-and-Forget

The biggest mistake? Treating your claude.md file like it's done once you set it up. Amjad recommends regular audits, especially after model releases. You're looking for:

  • Instructions that newer models no longer need
  • Random additions that conflict with each other
  • Instructions that should live in nested files instead
  • Constraints that should become hooks

Teams that skip this audit end up with several-hundred-line files that fill their context window immediately and hit the instruction budget before the actual work begins. "With every model release you should be looking at what you can remove instead of thinking about what you can add," Amjad argues.
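A quick way to surface audit candidates is to flag config files that have grown past a line budget. This is a hypothetical helper, not Amjad's tooling; the 150-line threshold echoes the accuracy cliff cited above, not any official limit.

```python
"""Flag claude.md files that may be blowing the instruction budget.
Hypothetical helper -- the 150-line threshold is borrowed from the
research cited above, not an official Claude Code limit."""
from pathlib import Path

BUDGET = 150  # lines; roughly where the cited research saw accuracy decline

def audit(root: str) -> list[tuple[str, int]]:
    """Return (path, line_count) for every claude.md over BUDGET lines,
    largest first."""
    flagged = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.name.lower() == "claude.md":
            count = len(path.read_text(encoding="utf-8").splitlines())
            if count > BUDGET:
                flagged.append((str(path), count))
    return sorted(flagged, key=lambda item: -item[1])
```

Anything this flags is a candidate for trimming, splitting into nested files, or converting into hooks.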

The analysis of 1,000+ public repos suggests this isn't just theoretical—it's the norm. Most developers are inadvertently sabotaging their AI coding assistants with config files they haven't meaningfully updated in months.

Which raises an interesting question about how we'll interact with these tools going forward: as models get better, will we need config files at all? Or will the optimal strategy eventually be to give the model as little explicit instruction as possible and just let it cook?

—Yuki Okonkwo

Watch the Original Video


The Highest Point of Leverage in Claude Code

Ray Amjad

14m 16s
Watch on YouTube

About This Source

Ray Amjad

Ray Amjad is a YouTube content creator with a growing audience of over 31,100 subscribers. Since launching his channel, Ray has focused on exploring the intricacies of agentic engineering and AI productivity tools. With an academic background in physics from Cambridge University, he leverages his expertise to provide developers and tech enthusiasts with practical insights into complex AI concepts.

