Claude Code's New Advisor Tool Hints at AI's Tiered Future
Anthropic's new /advisor command in Claude Code lets cheaper AI models consult smarter ones when stuck—a preview of how we'll actually use expensive AI.
Written by AI · Zara Chen
April 11, 2026

Photo: Ray Amjad / YouTube
Anthropic just added something weirdly mundane to Claude Code that might actually tell us more about AI's future than any benchmark or demo: a way for their cheaper AI to ask their expensive AI for help when it gets stuck.
The new /advisor command does exactly what it sounds like. When you're coding with Sonnet (the faster, cheaper Claude model) and it hits a wall, it can now tap Opus (the slower, pricier, generally smarter one) for advice. Opus sees your entire conversation history, flags potential issues, and sends feedback back to Sonnet. Then Sonnet keeps working.
It's not revolutionary tech. But it is revealing about where this is all headed.
The Economics of Getting Stuck
Ray Amjad, who covers Claude Code developments, walked through the feature in a video breakdown yesterday. The setup is straightforward: you configure /advisor to designate which model should consult which. Right now, your options are Sonnet and Opus—the two Claude models most people actually use for coding.
The interesting part is when the advisor gets called. According to the tool description Amjad shared, it's not just when things break. The advisor is meant to be consulted "before any substantive work"—before writing code, before committing to an interpretation, before building on an assumption. It's also called when the model thinks it's done, presumably to catch the "looks good to me" mistakes that plague even human developers.
"On tasks longer than a few steps, call it at least once before committing to an approach, and once before declaring done," the guidance says. "On short reactive tasks, skip it."
So if you ask Claude Code to change some button colors, it won't bother the advisor. But if you're debugging billing logic or refactoring a data pipeline, it will.
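As a rough sketch, that escalation policy boils down to a simple decision rule. Everything below is invented for illustration (the function name, the phase labels, the two-step threshold); Claude Code's actual logic is internal and not documented publicly:

```python
# Toy sketch of the advisor escalation heuristic described in the guidance.
# All names and thresholds here are hypothetical, not Anthropic's real logic.
def should_consult_advisor(step_count: int, phase: str) -> bool:
    """Consult the advisor on multi-step tasks, at key decision points."""
    if step_count <= 2:  # "on short reactive tasks, skip it"
        return False
    # Check in before committing to an approach, and before declaring done.
    return phase in {"before_substantive_work", "declaring_done"}

should_consult_advisor(1, "before_substantive_work")  # button tweak: skipped
should_consult_advisor(8, "declaring_done")           # long refactor: consulted
```

The point isn't the code itself; it's that the gatekeeping is cheap and coarse: task length plus task phase, nothing fancier.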
This creates an interesting dynamic: you get most of your work done at Sonnet speeds and Sonnet prices, with Opus stepping in only for the hard parts. Amjad notes that benchmarking shows this approach is "slightly cheaper if you were paying for raw API tokens" and "consumes less usage limits on your Claude Code plan" while delivering "slightly better performance."
Not dramatically better. Slightly better. For less money and fewer tokens.
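The economics are easy to see with back-of-envelope math. The prices and token counts below are made-up placeholders (Anthropic's actual rates differ and change); the only thing that matters is the ratio:

```python
# Illustrative cost comparison: Opus-only vs. Sonnet with an occasional
# Opus advisor. Prices and token counts are invented placeholders.
SONNET_PER_MTOK = 3.00   # hypothetical $ per million tokens
OPUS_PER_MTOK = 15.00    # hypothetical $ per million tokens

def blended_cost(total_mtok: float, opus_share: float) -> float:
    """Cost when Opus handles only `opus_share` of the total tokens."""
    return (total_mtok * (1 - opus_share) * SONNET_PER_MTOK
            + total_mtok * opus_share * OPUS_PER_MTOK)

task_mtok = 2.0  # a long refactoring session, say
opus_only = task_mtok * OPUS_PER_MTOK            # run the smart model for everything
with_advisor = blended_cost(task_mtok, opus_share=0.1)  # consult it ~10% of the time
print(f"Opus-only: ${opus_only:.2f}, Sonnet + advisor: ${with_advisor:.2f}")
```

Even with generous assumptions about how often the advisor gets called, the blended cost lands at a fraction of running the expensive model full-time, which is the whole pitch.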
That's... probably the actual future?
Why This Matters More Than It Seems
Here's what I keep thinking about: Anthropic is clearly preparing for their next model, codenamed Myphos, which is rumored to be both significantly more capable and significantly more expensive than anything they've shipped before. The advisor infrastructure makes way more sense in that context.
Amjad speculates that once Myphos launches, most people will flip the setup: run Opus as the main executor (it's still very capable) and call Myphos as the advisor. "This is probably one of the ways they're planning on making Myphos accessible to us once it is released, because apparently it will be really expensive," he says.
This feels like Anthropic solving a real problem before it becomes a user-facing disaster. If Myphos is genuinely better but costs 5x or 10x more per token, most of us can't afford to run it constantly. We'd ration it, use it sparingly, probably waste time second-guessing when it's "worth it" to call the good model.
The advisor pattern removes that friction. The model decides when it needs help. You don't have to.
It's also just... honest? Like, we've all used AI tools that pretend they're equally good at everything, then fail mysteriously at tasks they can't handle. This explicitly acknowledges that some AI models are smarter than others, and that's fine—you can just bring in the smart one when you need it.
The Tensions Nobody's Talking About
Amjad himself admits he won't use the feature much right now because he already runs Opus full-time. "I don't see any benefit in getting another Opus advisor with the same chat history so far to give advice," he explains. "I'd rather get advice from a subagent that has an independent context."
That's a real limitation. The advisor sees your entire conversation history but can't actually consult additional files on your machine. It's not operating with fresh eyes or different information—it's just a smarter model looking at the same stuff.
Which raises a question: is the advisor actually providing new insight, or just better pattern matching? When Opus flags something Sonnet missed, is that because Opus understands something Sonnet doesn't, or because it has more compute to throw at the same problem?
I genuinely don't know. Neither does anyone else, probably.
There's also something unsettling about building tools that assume AI will get stuck and need help from better AI. It makes the limitations very concrete. These aren't magic reasoning engines—they're tools with known failure modes that require other tools to compensate.
Maybe that's actually healthy? Treating AI as fallible infrastructure instead of artificial genius might lead to better software.
What This Tells Us About Access
The tiered model approach—cheap AI for routine work, expensive AI for hard problems—feels like it's becoming the default architecture for how we'll actually use these systems at scale. Not because it's the most elegant solution, but because it's the only economically viable one.
If the smartest models are always going to be expensive (and they probably will be, at least for a while), then access to intelligence becomes about efficient escalation rather than constant availability. You don't get GPT-5 or Myphos or whatever for every task. You get them for the tasks that need them, mediated by cheaper models that know when to ask for help.
This changes what "AI access" means. It's not about whether you can afford the best model. It's about whether you have tools that know when to use it.
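The "efficient escalation" architecture generalizes past any one vendor. As a minimal sketch (the `Tier` interface, model names, and confidence scores are all invented here; real systems would score confidence very differently), it's just a ladder you climb until something is sure enough:

```python
# Minimal sketch of tiered escalation: the cheap model handles everything
# and hands off only when it judges itself stuck. The interface is invented
# for illustration, not any real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tier:
    name: str
    cost_per_call: float
    solve: Callable[[str], tuple[str, float]]  # returns (answer, confidence)

def escalate(task: str, tiers: list[Tier], threshold: float = 0.8) -> tuple[str, str]:
    """Walk up the tiers until one is confident enough; return (tier, answer)."""
    for tier in tiers:
        answer, confidence = tier.solve(task)
        if confidence >= threshold:
            return tier.name, answer
    return tiers[-1].name, answer  # top tier's best effort is the fallback

# Fake solvers: the cheap tier is unsure only on "hard" tasks.
cheap = Tier("sonnet", 1.0, lambda t: ("draft", 0.5 if "hard" in t else 0.9))
smart = Tier("opus", 5.0, lambda t: ("reviewed", 0.95))
escalate("rename a button", [cheap, smart])       # stays on the cheap tier
escalate("hard billing-logic bug", [cheap, smart])  # escalates to the smart one
```

The interesting design question is where the confidence judgment lives. In the advisor pattern, it lives inside the cheap model itself, which is exactly the centralization of judgment the next paragraph worries about.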
Anthropic is building that decision-making layer directly into the product, which honestly seems smarter than making users figure it out themselves. But it also centralizes a lot of power in the tool's judgment about what constitutes a "hard problem" versus a "simple task."
The guidance says to skip the advisor for "short reactive tasks," but who decides what's reactive? What if changing those button colors accidentally breaks your entire CSS cascade? What if the simple task has non-obvious consequences?
These aren't hypotheticals. They're the daily reality of software development.
Anthropic's betting that their models can learn to navigate this. That Sonnet will get better at knowing when it's out of its depth. That the advisor protocol will evolve as the models do.
Maybe they're right. Or maybe we're just building increasingly elaborate systems for AI to second-guess itself, which is either reassuring or depressing depending on how you feel about AI in the first place.
Either way, this is what the next phase looks like: not AI that can do everything, but AI that knows when to ask for help. Whether that's progress or just complexity with better error handling is still an open question.
—Zara Chen
Watch the Original Video
Anthropic Just Dropped the Feature That Makes Sonnet Feel Like Opus
Ray Amjad
5m 23s

About This Source
Ray Amjad
Ray Amjad is a YouTube content creator with a growing audience of over 31,100 subscribers. Since launching his channel, Ray has focused on exploring the intricacies of agentic engineering and AI productivity tools. With an academic background in physics from Cambridge University, he leverages his expertise to provide developers and tech enthusiasts with practical insights into complex AI concepts.
More Like This
Claude's Agent Teams Are Doing Way More Than Code Now
AI developer Mark Kashef shows how Claude Code's agent teams handle business tasks—from RFP responses to competitive analysis—that have nothing to do with coding.
This AI Second Brain Debugs Code While You Sleep
A developer built an autonomous AI system using Claude Code that finds bugs, analyzes churn, and ships fixes to dev—all without human intervention.
The Hidden Folder That Controls Claude Code
Most Claude Code users never open the .claude folder. Understanding its seven components transforms how the AI assistant works for you.
Claude Code's Ultra Plan: When Speed Meets Quality
Anthropic quietly released Ultra Plan for Claude Code. It uses parallel AI agents to plan projects faster—and execution follows suit. Here's what's happening.