
Anthropic's Claude Code Guide Shows What We're Doing Wrong

Anthropic published official Claude Code best practices. Stockholm tech consultant Ani breaks down five common mistakes slowing developers down.

Written by AI. Marcus Chen-Ramirez

February 25, 2026


Photo: AI Explained By A Tech Consultant / YouTube

Anthropic just did something unusual for an AI company: they published documentation that essentially says "here's how people are screwing this up."

The company released official best practices for Claude Code, their AI coding assistant, and Stockholm-based tech consultant Ani immediately spotted five things she'd been doing wrong for months. Her tutorial walking through the documentation highlights a recurring pattern in AI tooling—the gap between how these systems are designed to work and how people actually use them.

What's interesting here isn't that users were making mistakes. It's that the mistakes reveal assumptions about AI assistance that don't match reality.

The Permission Problem

Claude Code asks for permission constantly. Want to write a file? Permission. Git commit? Permission. Run a bash command? Permission.

"Cloud will always ask you about permissions for writing to files, for doing git commits, for running bash commands, using MCP tools, and this is how a cloud is developed for safety reason," Ani explains. "However, this is slowing you down massively."

Anthropic built this friction deliberately—it's a safety feature, not a bug. But in practice, it creates a workflow problem. Developers have two options: use the "dangerously skip permissions" flag (which does exactly what it sounds like), or granularly specify which actions Claude can take without asking.
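The granular route runs through Claude Code's settings. As a sketch (the allowlist structure follows Claude Code's permission-rule format, but the specific tool and command patterns here are illustrative, not prescribed), a project-level `.claude/settings.json` might pre-approve only the actions you trust:

```json
{
  "permissions": {
    "allow": [
      "Edit",
      "Bash(git commit:*)",
      "Bash(npm run test:*)"
    ]
  }
}
```

Everything not listed still prompts, which keeps the safety net for genuinely destructive commands while removing the routine interruptions.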

The tension here is real. Safety mechanisms that interrupt flow state dozens of times per session will get disabled. The question is whether Anthropic built the right trade-off, or whether the documentation is quietly acknowledging that their safety-first approach needs workarounds to be usable.

Skills: Teaching Claude Your Conventions

The second practice Ani highlights is creating "skills"—domain-specific knowledge packets that extend Claude's understanding of your project. The example she gives is API conventions: use kebab-case for URL paths, camel-case for JSON properties, always include pagination for list endpoints.

This is smarter than it first appears. Instead of repeatedly telling Claude your conventions in every prompt, you define them once. "Skills extend Claude's knowledge with information specific to your project. Claude applies them automatically when relevant, or you can invoke them directly with a forward slash and the skill name," Ani notes.
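As a sketch of what such a skill can look like (the directory layout and frontmatter fields follow Anthropic's skills format; the file contents are illustrative, adapted from Ani's example), a `.claude/skills/api-conventions/SKILL.md` might read:

```markdown
---
name: api-conventions
description: Project API naming and pagination conventions. Use when writing or reviewing API endpoints.
---

# API Conventions

- URL paths use kebab-case (`/user-profiles`, not `/userProfiles`)
- JSON properties use camelCase
- Every list endpoint must support pagination parameters
```

Once defined, the conventions travel with the project instead of with your prompts, and teammates get them for free.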

It's essentially giving Claude a style guide and coding standards document it can reference. The limitation? You need to know what conventions matter enough to codify. Junior developers—the ones who might benefit most from AI assistance—are least equipped to identify what belongs in a skill definition.

Subagents and the Context Window Game

Practice three gets into the architecture of how Claude Code actually works: subagents. These are separate instances that run on their own context windows, handling discrete tasks without cluttering the main agent's memory.

"Sub agents are running commands on their own context window using their own tools and those are amazing if you don't want to clutter your main agent," Ani explains. Her example is a security reviewer subagent that specifically looks for injection vulnerabilities, authentication issues, exposed credentials, and insecure data handling.

This matters because context windows—the amount of information an AI can hold in working memory—are still limited, even in advanced models. As your conversation with Claude grows longer, earlier context gets pushed out. Delegating specific tasks to subagents preserves the main conversation for higher-level decision-making.

What Ani doesn't mention, but developers will immediately recognize: this is basically microservices architecture applied to AI agents. Same benefits, similar complexity costs.

Let Claude Interview You

The fourth practice is possibly the most counterintuitive: start with a minimal prompt and ask Claude to interview you.

"Start with minimal prompt and ask cloud to interview you using ask user question tool. Cloud asks about things you might not have considered yet including technical implementation, UI, UX, edge cases or trade-offs," Ani says.

The example command: "I want to build a LinkedIn scraping tool, then interview me in detail using the ask-user-question tool: ask about technical implementation and so on."

This inverts the typical workflow. Instead of trying to anticipate every requirement before you start, you let Claude surface considerations you haven't thought through. It's treating the AI as a requirements analyst rather than just a code generator.

The catch: this only works if Claude asks good questions. And whether it does depends on its training, which developers can't see or control. You're trusting Anthropic's judgment about what makes a good technical interview.

When Claude Gets Stuck

The final practice addresses a frustration Ani describes vividly: "Sometimes Claude can be stuck with the same task for a couple of hours. This normally happens when you leave your computer and come back and Claude is still working on the same task."

The fix is mechanical: ESC stops Claude mid-action while keeping context. There's a rewind button you can click twice to step back. An undo command reverses the last chain of changes. And a /clear command wipes the context when you want to start fresh between tasks.
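Condensed into a cheat sheet (the commands are as described in the video; exact keybindings and command names may differ between Claude Code versions):

```text
Esc       interrupt Claude mid-action; conversation context is kept
rewind    click twice to step back to an earlier point
undo      reverse the last chain of changes
/clear    wipe the context and start fresh between tasks
```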

These are emergency exits, not features you want to use regularly. But the fact that Anthropic documented them prominently suggests Claude Code getting stuck isn't rare. It's a known behavior pattern that needs workarounds.

What the Documentation Reveals

Anthropic's decision to publish these best practices says something about where AI coding tools are developmentally. These aren't power-user tips for getting 2% more performance. These are core workflow corrections for common misuse patterns.

The documentation exists because Anthropic watched how people actually used Claude Code and realized the delta between design intent and user behavior was large enough to require intervention. That's not a criticism—it's how good tools evolve. But it does highlight that AI coding assistants are still in the "you need to read the manual" phase, not the "it just works" phase.

Ani's tutorial is useful because it translates Anthropic's documentation into practical workflow changes. But the more interesting story is what needed translating in the first place. When official best practices contradict months of user behavior, someone designed something that made sense in theory but not in practice.

The question for developers isn't whether to adopt these practices—if you're using Claude Code, you probably should. The question is what the existence of these corrections tells us about the maturity of AI coding tools generally. We're still figuring out the basic interaction patterns. The interface conventions haven't stabilized. And the companies building these tools are discovering their design assumptions through user friction, then documenting the workarounds.

That's fine for early adopters who expect rough edges. It's less fine for the "work 10x faster" promises in the video description. Maybe we get there eventually. But Anthropic's documentation suggests we're not there yet.

Marcus Chen-Ramirez is a senior technology correspondent for Buzzrag.

Watch the Original Video

You've Been Using AI the Hard Way (Use This Instead)

AI Explained By A Tech Consultant

4m 19s
Watch on YouTube

About This Source

AI Explained By A Tech Consultant

AI Explained By A Tech Consultant is a YouTube channel that aims to simplify the complexities of modern technology through practical tutorials. Launched in late 2025, it offers insights into generative AI, Python, Snowflake, and modern development tools. While the exact subscriber count is unknown, the channel's focus on keeping viewers ahead in tech has garnered attention from both newcomers and experienced professionals.

Read full source profile
