Claude's Agent Teams Are Doing Way More Than Code Now
AI developer Mark Kashef shows how Claude Code's agent teams handle business tasks—from RFP responses to competitive analysis—that have nothing to do with coding.
Written by AI · Zara Chen
February 22, 2026

Photo: Mark Kashef / YouTube
Here's something wild: everyone got hyped about Claude Code's agent teams for building software, but it turns out the really interesting stuff happens when you stop asking them to write code at all.
Mark Kashef just dropped a walkthrough of seven use cases, and six of them have absolutely nothing to do with technical tasks. We're talking content repurposing, RFP responses, competitive intelligence reports—the kind of work that usually eats entire afternoons and leaves you wondering if there's a better way.
Spoiler: there is, apparently.
The Setup: One Prompt, Multiple Agents
The core mechanic is straightforward but kind of fascinating. You write a single prompt, Claude Code spawns a team of specialized agents, and they coordinate to handle complex workflows. The key distinction Kashef emphasizes: these are agent teams, not sub-agents. Sub-agents work in parallel but don't communicate. Agent teams talk to each other, share findings, and adjust their approach based on what their teammates discover.
That communication layer matters more than you'd think.
In one example, Kashef feeds a YouTube transcript into a content repurposing engine. He asks for four specialized writers—blog, LinkedIn, newsletter, Twitter—and gives them a constraint: they each need to identify three compelling insights from the source material and share those insights with the team to ensure no two platforms lead with the same angle.
What happens next is kind of delightful. Claude Code, acting as the orchestrator, notices overlap: "Three of four teammates reported their insights, but there seems to be heavy overlap. All three picked the three-level loading system and the kitchen analogy skills plus MCPs. I need to wait for the Twitter writer's picks before I assign unique lead angles."
The system is self-correcting. It catches redundancy and redistributes angles so each piece of content feels genuinely distinct. You're not just getting four versions of the same take—you're getting four perspectives that emerged from the same source material.
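That deduplication step can be pictured as simple set bookkeeping. Here's a minimal Python sketch of the coordination pattern — not Claude Code's internals, and the writer names and insights are hypothetical stand-ins:

```python
# Each writer proposes three insights before drafting; the orchestrator
# waits for all picks, then assigns each platform a distinct lead angle.
from collections import Counter

proposals = {
    "blog":       ["three-level loading", "kitchen analogy", "token budgeting"],
    "linkedin":   ["three-level loading", "kitchen analogy", "agent handoffs"],
    "newsletter": ["three-level loading", "kitchen analogy", "human checkpoints"],
    "twitter":    ["agent handoffs", "token budgeting", "three-level loading"],
}

# Count how many writers picked each insight to surface heavy overlap.
counts = Counter(i for picks in proposals.values() for i in picks)

assigned, leads = set(), {}
for writer, picks in proposals.items():
    # Prefer the least-contested insight that no teammate has claimed yet.
    for insight in sorted(picks, key=lambda i: counts[i]):
        if insight not in assigned:
            leads[writer] = insight
            assigned.add(insight)
            break

print(leads)  # four writers, four distinct lead angles
```

The real system does this through natural-language messages between agents rather than a shared data structure, but the effect is the same: nobody publishes until the overlap is resolved.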
Sequential vs. Parallel: Knowing When Agents Need to Wait
Kashef's pitch deck example illustrates something important about workflow design. He sets up three agents—researcher, slide writer, designer—with explicit task dependencies. The researcher gathers data points first. Then the slide writer creates content based on that research. Finally, the designer builds the actual PowerPoint file using Python and the HTML-to-PPTX library.
"This is a really good example of a sequential handoff workflow where you can't really have the agent teams work in parallel," Kashef explains. "You need each one to wait for the prerequisite to go to the next stage."
But here's where it gets practical: he also builds in a human-in-the-loop checkpoint. Before the designer starts building, the system pauses and asks for approval on the plan—color palette, typography, slide dimensions. You can approve as-is, approve with notes, or reject and request rework.
"When you say involve me, you essentially create a human in the loop process yourself," Kashef notes. "And the agents are really good at actually spinning that up and interrupting their flow."
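The shape of that workflow — strict stage ordering with an approval gate before the expensive step — is easy to sketch. A hedged Python illustration with stand-in functions for the agents (the stage names and outputs are hypothetical, not from the video):

```python
# Sequential handoff: each stage consumes the previous stage's output,
# with a human approval gate before the final (expensive) design stage.

def researcher(topic):
    return {"topic": topic, "data_points": ["stat A", "stat B", "stat C"]}

def slide_writer(research):
    return {"slides": [f"Slide on {p}" for p in research["data_points"]]}

def human_checkpoint(plan, decision="approve"):
    # In a real run this would pause for input(); here the decision is injected.
    if decision == "reject":
        raise RuntimeError("Plan rejected - rework requested")
    return plan  # "approve" or "approve with notes" both proceed

def designer(content):
    return f"deck built with {len(content['slides'])} slides"

# Stages run strictly in order: each waits on its prerequisite.
research = researcher("agent teams")
content = human_checkpoint(slide_writer(research))
deck = designer(content)
print(deck)  # deck built with 3 slides
```

Putting the checkpoint immediately before the designer is the token-saving move: rejecting a plan costs almost nothing, while rejecting a finished deck means regenerating everything.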
The output isn't necessarily beautiful—Kashef is upfront about that—but it's respectable and organized. And if you want to refine it further, there's apparently a Claude for PowerPoint extension that lets you make surgical edits without burning through tokens or respinning the entire agent team.
RFPs, Competitive Analysis, and the AI Advisory Board
The RFP use case feels particularly relevant for anyone who's ever stared at a 40-page government tender and wondered where to even start. Kashef sets up four agents: an RFP analyst to parse requirements, a capability researcher to pull from your team's experience and case studies, and two section writers working in parallel to draft different parts of the proposal.
The workflow here is hybrid—some parallel execution, some sequential handoffs. The first two agents run simultaneously, then pass their findings to the section writers, who also work in parallel. At the end, a team lead reviews everything for consistent tone, terminology, and completeness, then flags any requirements that didn't get addressed.
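The hybrid shape — fan out, hand off, fan out again, then converge on a reviewer — maps cleanly onto ordinary concurrency primitives. A rough Python sketch of the pattern (the agent functions and RFP content are illustrative placeholders, not Kashef's actual setup):

```python
# Hybrid workflow: two analysis agents run in parallel, hand off to two
# section writers (also parallel), then a team lead reviews everything.
from concurrent.futures import ThreadPoolExecutor

def rfp_analyst():
    return {"requirements": ["security", "uptime SLA"]}

def capability_researcher():
    return {"case_studies": ["gov portal", "bank API"]}

def section_writer(name, analysis, capabilities):
    return (f"{name}: covers {analysis['requirements'][0]} "
            f"citing {capabilities['case_studies'][0]}")

def team_lead_review(sections, analysis):
    covered = " ".join(sections)
    missing = [r for r in analysis["requirements"] if r not in covered]
    return {"sections": sections, "unaddressed": missing}

with ThreadPoolExecutor() as pool:
    # Phase 1: analyst and researcher run simultaneously.
    f1, f2 = pool.submit(rfp_analyst), pool.submit(capability_researcher)
    analysis, capabilities = f1.result(), f2.result()
    # Phase 2: both section writers draft in parallel from those findings.
    futures = [pool.submit(section_writer, n, analysis, capabilities)
               for n in ("Technical approach", "Past performance")]
    sections = [f.result() for f in futures]

# Phase 3: a sequential review flags requirements nobody addressed.
report = team_lead_review(sections, analysis)
print(report["unaddressed"])  # ['uptime SLA']
```

The final review pass is the part that earns its keep on a 40-page tender: it's the mechanism that catches a requirement both writers assumed the other one covered.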
Kashef's competitive intelligence example does something slightly different: he gives Claude Code less explicit instruction about agent roles. Instead of dictating exactly which agents to spawn, he lets the system decide how to divvy up the work of analyzing Claude Code against four competitors (Cursor, Copilot, Codex, Aider).
The system creates specialist analysts for each competing platform, plus a synthesis lead who consolidates their findings. Each analyst produces a markdown file with their research, and the synthesis lead pulls it together into a coherent competitive overview.
"I'm using products here, but this could be competitors as in other companies, other platforms, other frameworks, whatever you want," Kashef points out.
Then there's the AI advisory board concept, which might be the most conceptually interesting example. Kashef poses a real business question: should he launch a $7,500 six-week AI leadership bootcamp for executives? He spawns five agents—market researcher, audience gap analyst, financial modeler, competitive strategist, devil's advocate—and asks them to investigate from different angles.
The devil's advocate role is particularly clever. Their job is to take everyone else's analysis and argue against the whole premise, or suggest a completely different approach. The final deliverable includes not just a recommendation but also "top three risks regardless of the decision"—acknowledging that even if you proceed, things could still go sideways.
"This really helps you if you have a very deep problem you want to go through and you might not have the colleagues of your own to be able to push back on you," Kashef says.
The Prompt Engineering Layer
What's striking across all these examples is how much control you have through prompt design. You can be prescriptive—dictating exactly which agents to create, what their roles are, where they should save files—or you can give Claude Code more autonomy to figure out the optimal team structure.
Kashef's advice: "The more intentional you are on telling it exactly where the inputs lie, what the criteria is, and where it should output, the more control and predictability you have over a pretty unpredictable process."
You can also build in conditions that agents must satisfy before moving forward. In the content repurposing example, each writer has to read the full transcript and identify three compelling insights before they can start writing. That constraint forces deeper engagement with the source material and makes the inter-agent communication more meaningful.
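In practice, all of those constraints live in the prompt itself. A hypothetical prompt in the spirit of Kashef's content-repurposing setup — the file paths and phrasing here are illustrative, not taken from the video:

```
Create an agent team of four specialized writers: blog, LinkedIn,
newsletter, and Twitter.

Input: the transcript at ./transcript.md — each writer must read it
in full before doing anything else.

Condition: before drafting, each writer identifies three compelling
insights and shares them with the team. No two platforms may lead
with the same angle.

Involve me before final drafts are written.

Output: one markdown file per platform, saved to ./output/.
```

Each sentence maps to one of the levers Kashef describes: where the inputs lie, what the criteria are, where the human checkpoint sits, and where the output goes.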
One practical note: Anthropic apparently recommends keeping agent teams to three to five members. Beyond that, you risk diminishing returns, overengineering, and "most importantly, a huge consumption of tokens." Kashef mentions several workflows that burned through 150,000 to 180,000 tokens—not trivial if you're on a limited plan.
What This Actually Means
The shift from "AI helps you code" to "AI handles complex business workflows" isn't just about capability expansion. It's about rethinking what these systems are actually for.
Most of the workflows Kashef demonstrates are things humans can definitely do—and probably do better, given enough time and focus. But the "enough time and focus" part is the constraint. An RFP response that would normally consume a full workday now takes 30 minutes of setup and review time, with the heavy lifting happening in between.
The competitive intelligence report that would require coordinating multiple team members and synthesizing their findings now happens in a single session with explicit documentation of each analyst's reasoning.
The advisory board that would require scheduling five different stakeholder calls now runs in parallel, with each perspective fully articulated and a synthesis that captures both consensus and disagreement.
This isn't about replacing human judgment—the human-in-the-loop checkpoints make that clear. It's about scaling the part of knowledge work that's systematic but labor-intensive: research, analysis, synthesis, first-draft generation.
The question isn't whether these agent teams can do the work as well as humans. It's whether they can do it well enough to free up human attention for the parts that actually require human attention. Based on Kashef's examples, the answer increasingly seems to be yes—as long as you know how to prompt them properly and where to intervene.
—Zara Chen
Watch the Original Video
7 Things You Can Build with Claude Code Agent Teams
Mark Kashef
25m 25s
About This Source
Mark Kashef
Mark Kashef is a well-regarded YouTube content creator in the field of artificial intelligence and data science, boasting a subscriber base of 58,800. With more than a decade of experience in AI, particularly in data science and natural language processing, Mark has been sharing his expertise through his AI Automation Agency, Prompt Advisers, for the past two years. His channel is a go-to resource for educational content aimed at enhancing viewers' understanding of AI technologies.
More Like This
Claude's Agent Teams: Powerful Collaboration at a Price
Claude Code's new Agent Teams feature lets AI agents debate and collaborate on code. It's impressive—but the token costs might make you think twice.
Everything You've Heard About AI Is Probably Wrong
AI capabilities are doubling every 4 months, but most people are working with outdated info. Here's what's actually happening in 2025.
Claude Code's Hidden Settings Make It Actually Useful
AI LABS reveals 12 buried configuration tweaks that fix Claude Code's most frustrating limitations. From memory retention to output quality fixes.
Claude's 1M Context Window: The Upgrade That Could Cost You
Anthropic's free 1M context window for Claude sounds amazing—until you understand how token management actually works under the hood.