Claude's Agent Teams: What 7x Cost Actually Buys You
Anthropic's new Agent Teams feature promises parallel AI work and inter-agent communication. But it costs up to 7x more than standard Claude. What are you paying for?
Written by AI. Samira Okonkwo-Barnes
February 26, 2026

Photo: Kenny Liao / YouTube
Anthropic's Claude Code just gained the ability to spawn teams of AI agents that work in parallel and—crucially—talk to each other. The feature, still in research preview, represents a meaningful architectural shift from the company's existing subagent system. It also costs substantially more to run.
Kenny Liao, who writes the AI Launchpad newsletter, documented what happens when you enable Agent Teams and actually use it for real work. His testing reveals both the technical promise and the economic reality of multi-agent systems that can coordinate.
The Architecture: What Communication Actually Means
The distinction between Agent Teams and Claude's existing subagents hinges on one word: communication. Subagents function like tools—Claude passes them input, waits for output, and cannot steer them during execution. If a subagent produces incorrect work, Claude only discovers this after completion, requiring a full respawn with modified instructions.
Agent Teams changes this dynamic. Each teammate runs as a full Claude Code instance with its own context window, but they share a mailbox system. Agents can broadcast messages to the entire team or send direct messages to specific teammates. The team lead—itself a Claude instance—orchestrates everything: spawning teammates, generating task lists, assigning work, and synthesizing outputs.
Liao describes the communication infrastructure: "Unlike sub agents, these teammates can broadcast messages to the entire team or they can send DMs to individual teammates. But on top of that, you as the user can also drop in and send a DM to any one of these agents, including the team lead."
That last part—human ability to message individual agents—sounds useful but proves impractical. Agents work at machine speed. By the time you craft a message to one agent, the team's coordination has likely moved past the point where your input matters. More fundamentally, the goal should be autonomous agents that don't require constant human supervision.
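The mailbox pattern Liao describes can be sketched in miniature. Everything below, the class, its methods, and the message format, is an illustrative toy model, not Anthropic's actual API; it exists only to make the broadcast-versus-DM distinction concrete.

```python
from collections import defaultdict

class TeamMailbox:
    """Toy model of the shared mailbox Liao describes.
    Names are illustrative; this is not Anthropic's implementation."""

    def __init__(self, members):
        self.members = set(members)
        self.inboxes = defaultdict(list)  # agent name -> list of (sender, text)

    def broadcast(self, sender, text):
        # A broadcast is delivered to every teammate except the sender.
        for member in self.members - {sender}:
            self.inboxes[member].append((sender, text))

    def dm(self, sender, recipient, text):
        # A direct message lands in exactly one inbox.
        self.inboxes[recipient].append((sender, text))

# The human user can also message any agent, including the lead:
box = TeamMailbox({"lead", "voice", "engagement", "accuracy"})
box.broadcast("accuracy", "7x cost figure applies only to plan mode")
box.dm("user", "lead", "prioritize the cost section")
print(len(box.inboxes["voice"]))  # the broadcast reached the voice reviewer
```

The design choice worth noting: because every broadcast fans out to every inbox, message volume (and therefore token cost) grows with team size, which is exactly the economics discussed later in this piece.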
The Newsletter Test: Three Reviewers, One Document
Liao tested Agent Teams by asking it to review his newsletter draft. He designed a three-agent team: a voice reviewer to check consistency with his writing style, an engagement reviewer to assess narrative flow, and an accuracy reviewer to fact-check against Claude's official documentation.
All three teammates ran Claude Sonnet rather than the more expensive Opus model, a deliberate cost mitigation. Before launching the team, Liao also had Claude review the Agent Teams documentation and generate an effective prompt for the review task, which the team lead then used to orchestrate the work.
The accuracy reviewer finished first and sent findings to the voice reviewer via direct message. The agents exchanged observations, challenged each other's assessments, and flagged issues. The accuracy agent caught that Liao had cited a "7x cost increase" without specifying this applied specifically to Claude's planning mode. The engagement agent identified a five-sentence paragraph that needed splitting.
What made the team structure valuable wasn't just parallel processing—it was mid-task coordination. As Liao notes: "It calls out that the accuracy reviewer's 7x plan mode discovery changed the engagement reviewer's subject line recommendation. So having them talk to each other actually did change the course of their work."
The Shutdown Protocol: Why Cleanup Matters
One detail stands out in Liao's documentation: he explicitly instructed the team lead to clean up the team after completing work. This matters because idle agents continue consuming tokens even when not actively working—they send status messages, check in on each other, and keep the communication channel alive.
When shutdown begins, the team lead messages each agent individually. The teammates respond with "shutdown approved" before terminating—apparently a safeguard mechanism in case an agent isn't actually finished and needs to protest the shutdown.
This choreography reveals something about multi-agent systems: coordination isn't free, even at rest. The token costs accumulate from communication overhead, not just productive work.
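The handshake Liao observed, where the lead messages each agent and waits for "shutdown approved," can be sketched roughly as follows. The class names, the refusal path, and the flow are assumptions drawn from his description, not Anthropic's actual shutdown code.

```python
class Teammate:
    """Toy teammate that must approve its own shutdown,
    modeled loosely on the behavior Liao observed."""

    def __init__(self, name, busy=False):
        self.name = name
        self.busy = busy

    def request_shutdown(self):
        # An agent mid-task can refuse, protecting unfinished work.
        if self.busy:
            return f"{self.name}: shutdown denied, task in progress"
        return f"{self.name}: shutdown approved"

def shut_down_team(teammates):
    # The lead messages each agent individually and collects replies;
    # agents still working survive the sweep.
    replies = [t.request_shutdown() for t in teammates]
    remaining = [t for t in teammates if t.busy]
    return replies, remaining

team = [Teammate("voice"), Teammate("engagement"), Teammate("accuracy", busy=True)]
replies, still_running = shut_down_team(team)
print(len(still_running))  # 1 agent declined to terminate
```

Even this toy version shows why cleanup costs tokens: every termination is a round-trip message exchange, not a free kill signal.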
The Visibility Problem and Liao's Solution
Agent Teams generates extensive inter-agent communication that users cannot see by default. Liao built two free Claude Code skills to address this visibility gap.
The first skill exports the full agent conversation to a standalone HTML file, showing broadcast messages, direct messages, and task assignments with filtering options. The second skill analyzes that exported conversation and produces a report assessing whether the problem was even suitable for Agent Teams.
That analysis follows Anthropic's official best practices documentation. It evaluates context sharing between agents, whether tasks were sized appropriately, and where the team design could improve. The skill then generates an optimized version of the original prompt incorporating those improvements.
Liao's analysis tool flagged several inefficiencies in his newsletter review team: tasks could have been broken into smaller subtasks with checkpoints, the team lead requested full reports after agents had already shared findings (creating redundant work), and one agent couldn't locate a referenced skill on first attempt.
The Cost Question: What 7x Buys You
Anthropic's documentation states Agent Teams can use up to seven times more tokens than standard Claude sessions. Liao's experience suggests this is conservative for some use cases.
He typically operates comfortably within his Claude subscription limits. After enabling Agent Teams for testing, he hit session limits quickly—with over an hour remaining before reset. The token consumption accelerated dramatically.
The cost multiplication has structural causes. When one agent broadcasts a 100-token message to five teammates, those 100 tokens become input tokens five times over. Communication overhead scales with team size and coordination frequency.
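The multiplication is easy to see with rough numbers. The figures below are illustrative arithmetic, not measured data from Liao's sessions:

```python
def broadcast_input_tokens(message_tokens, team_size):
    """Input tokens one broadcast adds across the team:
    each recipient (everyone but the sender) ingests the full message."""
    return message_tokens * (team_size - 1)

# A 100-token broadcast in a 6-agent team (one sender, five recipients)
# becomes 500 input tokens before any productive work happens.
print(broadcast_input_tokens(100, 6))  # 500
```

Multiply that by dozens of coordination messages per session and the gap between a solo agent and a chatty team stops being surprising.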
Users can mitigate costs by running cheaper models (Sonnet or Haiku) for teammates while reserving Opus for the team lead who orchestrates everything. But even with model downgrading, Agent Teams will always cost more than equivalent subagent workflows because communication itself has a price.
When Teams Make Sense
Liao offers a practical heuristic: use subagents when communication between them doesn't matter; use Agent Teams when teammates benefit from sharing findings, challenging each other, and coordinating on complex problems.
Anthropic's documentation lists research and review as strong use cases—exactly what Liao's newsletter evaluation demonstrated. The accuracy reviewer's findings genuinely changed how the engagement reviewer approached subject line recommendations. That coordination produced better output than three isolated agents working independently.
But not every problem needs this architecture. The team lead's design becomes critical—it defines tasks, prompts individual teammates, steers workflow, and serves as the quality gate. As Liao emphasizes: "Getting the most out of agent teams is really going to come down to how you design and prompt your team lead."
The feature remains in research preview, requiring manual enabling in settings. It's not production-ready infrastructure. It's an experiment in what multi-agent coordination costs and delivers.
The economic question isn't whether Agent Teams works—Liao's testing shows it does. The question is whether what you're building justifies multiplying your AI costs by factors of three to seven. Sometimes mid-task coordination between specialized agents produces value that sequential single-agent work cannot. Sometimes it just produces expensive inter-agent chat logs about work that one agent could have handled alone.
Samira Okonkwo-Barnes covers technology policy and regulation for Buzzrag.
Watch the Original Video
Claude Code Agent Teams - Everything to Know
Kenny Liao
25m 18s
About This Source
Kenny Liao
Kenny Liao is a dynamic presence in the YouTube landscape, specializing in artificial intelligence education. With a subscriber base of 4,550, his channel 'AI Launchpad' has been active since mid-2025. The channel is dedicated to empowering viewers by teaching them how to build AI agents and systems that address real-world challenges. Aimed at both novices and seasoned developers, Liao's content is a treasure trove of in-depth AI knowledge.