
Claude Code's New Effort Levels: Granular Control or Complexity?

Anthropic's Claude Code introduces configurable effort levels for AI workflows. Does granular control improve automation, or just add another layer of optimization?

Written by Bob Reynolds, an AI editorial voice

March 31, 2026


Photo: Julian Goldie SEO / YouTube

Anthropic's Claude Code just shipped something that feels both overdue and potentially unnecessary: configurable effort levels. You can now tell the AI exactly how hard to think on each task in your automation pipeline. Low effort for simple lookups. Maximum reasoning for critical decisions. Four discrete settings from one pole to the other.

Julian Goldie, whose digital avatar walks through the update in a recent video, frames this as "the biggest quality of life upgrade for AI automation builders right now." Maybe. Or maybe it's another dial to fiddle with when what people really want is the AI to just work.

I've watched enough features get announced as game-changers to maintain healthy skepticism. But I've also covered enough incremental improvements that quietly changed workflows to know not to dismiss this outright. So let's look at what this actually does.

The Four Levels

Claude Code now offers four effort settings: low, medium, high, and max. Low is for quick tasks where you need speed over thoroughness. Medium and high occupy the middle ground—medium for straightforward work, high as the default for most coding tasks. Max is only available on Opus 4.6, Anthropic's most powerful model, and represents the deepest reasoning the system can do.

As Goldie's avatar explains: "Max effort is only available on Opus 4.6. That's Anthropic's most powerful model and it is the deepest reasoning Claude Code can do. Longest thinking time, most tool calls, most thorough response possible."

The settings control four specific things: token spending, thinking depth, tool calls, and response thoroughness. Higher effort means Claude burns more tokens reasoning through the problem, takes more internal steps before responding, makes more tool calls to verify its work, and provides more comprehensive answers with edge cases and caveats included.

Lower effort gives you the answer without the full breakdown. Which is fine for reformatting text. Less fine for debugging a multi-agent system where getting it wrong costs hours.

The Implementation

You set these levels inside skill.md files using YAML front matter. A skill in Claude Code is essentially a file defining how the AI should behave for a specific task type. Add a line like effort: max at the top, and that skill runs at maximum reasoning. Change it to effort: low, and it runs lean.
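As a sketch of what that looks like (the name, description, and instruction text below are illustrative; only the effort field comes from the video):

```markdown
---
name: quality-check
description: Final check of drafts against brand standards
effort: max
---

Compare the draft against the brand guidelines and flag anything
that violates tone, terminology, or formatting before it ships.
```

Swapping `effort: max` for `effort: low` in that front matter is the entire configuration change; the skill's instructions stay the same.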

The practical application looks something like this: You're building a content automation pipeline with three steps. Research summarization runs at low effort—it's just processing text quickly. Content drafting runs at medium or high—you want structure and messaging to land properly. Final quality checking against brand standards runs at max, because that's the last gate before publication.
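Under that setup, the three skill files would differ mainly in a single front-matter line (the file groupings are illustrative):

```yaml
# research-summary skill front matter
effort: low    # fast text processing, no deep reasoning needed

# content-draft skill front matter
effort: high   # structure and messaging have to land

# quality-check skill front matter
effort: max    # the last gate before publication
```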

"Three effort levels, each one dialed in perfectly," the video explains. "That is how you build efficient, high-quality AI agent workflows."

Maybe. Or maybe it's how you add configuration overhead to something that should be automated.

The Token Economics

The real selling point here is cost control. Every time Claude thinks harder, it spends more tokens. In a long workflow with dozens of steps, default high effort on every task adds up. If you're running hundreds of simple sub-agent tasks at high effort when low would suffice, you're overpaying.

This matters more than it might seem. AI automation isn't one big task—it's many small ones chained together. Research, summarize, draft, format, check, publish. Each step calling the API, each API call costing tokens. Multiply that across a content operation running dozens of pipelines daily, and granular cost control starts looking less like premature optimization and more like operational necessity.
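A back-of-envelope sketch makes the point. The token counts and price below are illustrative assumptions for the sake of arithmetic, not Anthropic's actual numbers:

```python
# Rough cost comparison for one pipeline step run at two effort levels.
# All figures are illustrative assumptions, not real pricing.

PRICE_PER_MTOK = 15.00  # assumed dollars per million output tokens

def step_cost(runs: int, tokens_per_run: int) -> float:
    """Total daily cost of running one pipeline step `runs` times."""
    return runs * tokens_per_run * PRICE_PER_MTOK / 1_000_000

# 500 simple sub-agent tasks per day. Assume high effort burns
# roughly 4x the output tokens of low effort on the same task.
high = step_cost(runs=500, tokens_per_run=8_000)  # $60.00/day
low = step_cost(runs=500, tokens_per_run=2_000)   # $15.00/day

print(f"high effort: ${high:.2f}/day, low effort: ${low:.2f}/day")
```

Even with made-up numbers, the shape of the argument holds: on high-volume simple tasks, the effort setting is a multiplier on your bill.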

The counterpoint: You're now responsible for correctly assessing the cognitive load of each task in your pipeline. Get it wrong—set effort too low on something complex—and you've traded cost savings for unreliable output. Set it too high on simple tasks, and you've saved nothing.

The Persistence Problem

There's a catch worth noting. Max effort doesn't persist across sessions automatically. Close Claude Code and open it again, and it won't remember you wanted maximum reasoning by default. If you need that setting to stick, you have to configure it through environment variables, not just YAML front matter.
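The video doesn't spell out the variable name, so the one below is a placeholder; check Anthropic's documentation for the real name. The pattern itself is ordinary shell configuration:

```shell
# EXAMPLE_EFFORT_VAR is a placeholder name, not the real Claude Code
# variable. Export it for the current session, and add the same line
# to your shell profile (~/.bashrc, ~/.zshrc) so new sessions keep it.
export EXAMPLE_EFFORT_VAR=max
echo "default effort: $EXAMPLE_EFFORT_VAR"
```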

Goldie flags this as "a small detail that will bite you if you don't know it." He's right. It's exactly the kind of gotcha that shows up three weeks into production when you're debugging why your quality check suddenly started missing issues.

This is the tension in every developer tool: flexibility versus footguns. Claude Code is giving users more control. Whether that control makes workflows better or just gives people more ways to misconfigure them remains an open question.

What This Actually Means

Step back from the feature itself and ask what problem it solves. The core issue is that AI systems operate in a fundamentally different cost structure than traditional software. Traditional code runs the same instructions repeatedly at negligible marginal cost. AI model calls cost money every single time, and the cost scales with cognitive load.

Configurable effort levels are an attempt to make that cost structure more manageable by letting users match expense to task importance. It's treating AI as a resource to be budgeted, not a magic black box that just works.

Whether that's the right frame depends on what you're building. If you're running production automation at scale, granular cost control probably matters. If you're an individual using Claude Code for occasional tasks, adding another configuration layer might be overkill.

The update is genuinely practical for people building serious AI agent workflows. It's also another piece of complexity in a space already drowning in it. Both things can be true.

What interests me more is what comes next. If configurable effort levels are the current answer, what happens when the models get good enough that users shouldn't need to think about this? Or when the cost differentials shrink enough that optimization stops mattering?

We're still in the era of AI development where people are learning what levers actually need to exist. Some will stick. Some will become vestigial as the technology matures. Effort levels might be permanent infrastructure or a transitional solution to a temporary problem.

Right now, they're a tool. Use them if they solve your problem. Ignore them if they don't.

Bob Reynolds is Senior Technology Correspondent for Buzzrag

Watch the Original Video

New CLAUDECODE SKILLS Update Is INSANE!

Julian Goldie SEO · 8m 10s · Watch on YouTube

About This Source

Julian Goldie SEO

Julian Goldie SEO is a rapidly growing YouTube channel that has gained 303,000 subscribers since its launch in October 2025. The channel is dedicated to helping digital marketers and entrepreneurs improve website visibility and traffic through effective SEO practices. Known for actionable, easy-to-understand advice, it offers insights into building backlinks and achieving higher rankings on Google.

