
When Software Developers Stop Writing Code for $1K Daily

StrongDM's three-person team spends $1,000/day on AI tokens with no handwritten code. What happens when the unit of work shifts from instructions to tokens?

Written by Samira Okonkwo-Barnes, an AI editorial voice.

February 21, 2026


Photo: AI News & Strategy Daily | Nate B Jones / YouTube

StrongDM CTO Justin McCarthy disclosed in early February that his three-person engineering team targets $1,000 per day in token spend. No handwritten code. No code review. Just specifications, intelligence budgets, and outcomes.

That number—a thousand dollars daily in purchased inference—sits at the center of a transformation that strategist Nate B Jones argues represents the first categorical change in computing in sixty years. Not an incremental improvement. A different physics entirely.

The claim sounds hyperbolic until you examine what's happening to the economics. When Cursor's AWS costs doubled overnight from $6 million to $12 million between May and June 2025—coinciding with Anthropic's launch of priority service tiers—the company wasn't experiencing a cost overrun. It was experiencing a paradigm shift it hadn't fully prepared for. Cursor subsequently gutted its unlimited $20/month plan, introduced a $200/month tier, and watched its subreddit turn into a complaint forum.

Jones frames this as the transition from instructions to tokens as the fundamental unit of work. For six decades, software development meant writing deterministic instructions: a human translates business logic into machine logic, one function at a time. The scarce resource was developer time. The management challenge was headcount planning.

In a token-based paradigm, you don't tell the machine what to do step by step. You describe the outcome you want, provide context, and purchase enough intelligence to generate a result. The machine figures out the implementation. The scarce resource shifts from time to conversion capability—the ability to aim tokens at the right problems, structure context effectively, and measure whether purchased intelligence produces economic value.

The Economics Are Stranger Than They Appear

Journalist Ed Zitron's reporting provides the financial texture. Anthropic spent $2.66 billion on AWS through September 2025 against an estimated $2.55 billion in cumulative revenue over the same period. More than 100% of topline revenue went to AWS alone, before accounting for Google Cloud spend. Perplexity spent 164% of its entire 2024 revenue across AWS, Anthropic, and OpenAI combined.

These companies aren't mismanaging budgets. They're betting that topline growth will outpace infrastructure costs as they scale into positive unit economics. The bet requires a specific belief: that per-token inference costs will keep falling while consumption explodes—the Jevons paradox applied to artificial intelligence.

The price curve supports this. GPT-4 equivalent performance cost $20 per million tokens in late 2022. Claude 3.5 Sonnet runs at $3 per million input tokens today. Jones projects that within two years, Sonnet-equivalent models will cost cents per million tokens. Meanwhile, average enterprise AI spend hit $85,000 monthly in 2025, up 36% year-over-year. The share of organizations planning to spend over $100,000 monthly doubled from 20% to 45%.
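To make that price curve concrete, here is a back-of-envelope sketch of how many tokens a StrongDM-style $1,000 daily budget buys at each point. The prices are the illustrative figures cited above; the projected "cents per million" figure is represented here as $0.05, an assumption for the sake of arithmetic, and real pricing varies by model, provider, and input versus output tokens.

```python
# Back-of-envelope token budget math using prices cited in the article.
# All figures are illustrative approximations, not official price sheets.

DAILY_BUDGET_USD = 1_000  # StrongDM's reported daily token spend

# Approximate input-token prices per million tokens, per the article's timeline
prices_per_million = {
    "GPT-4-class (late 2022)": 20.00,
    "Claude 3.5 Sonnet (today)": 3.00,
    "Sonnet-class (projected)": 0.05,  # "cents per million" assumption
}

for label, price in prices_per_million.items():
    tokens = DAILY_BUDGET_USD / price * 1_000_000
    print(f"{label}: ${price}/M buys ~{tokens / 1e9:.2f}B tokens/day")
```

The same budget that bought 50 million tokens in late 2022 buys roughly 333 million today, and would buy 20 billion at the projected price point: the consumption explosion the Jevons argument predicts.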

OpenAI's rumored $20,000/month AI employee pricing—spanning $2,000 for knowledge workers to $20,000 for AI research specialists—becomes comprehensible in this context. Enterprise buyers are concluding that even at those price points, purchased intelligence costs less than the human professionals they'd otherwise employ.

But here's where the analysis gets more complex: Jones argues this isn't simple labor substitution. "At many companies, it looks like we're going to keep our PhD researchers. We're not going to hire more, but we're going to give each of these researchers a mini me, and they're going to expand their footprint by two or 3x."

Three Developer Tracks Are Emerging

Jones identifies three distinct career paths opening up, each with different skill requirements and compensation dynamics.

Track One: The Orchestrator. These developers don't write code—they specify outcomes and manage the intelligence that produces those outcomes. Core skills include system design, specification writing, quality evaluation, and token economics. As Jones puts it: "They're effectively factory managers with intelligence. Their value scales with the volume of intelligence that they can direct, which means their compensation is probably going to correlate with token budgets long term rather than lines of code."

Track Two: The Systems Builder. This is infrastructure work—building the agent frameworks, evaluation pipelines, context management systems, and routing layers that orchestrators use. Jones describes this as "very deep technical work. It's closer to traditional systems engineering than to application development. But with an entirely new stack." The volume is smaller, the specialization higher, and the compensation ceiling elevated because the leverage is company-wide.

Track Three: The Domain Translator. This is the track Jones suggests almost nobody is discussing, and potentially the largest of the three. These are professionals who combine technical fluency with deep domain expertise: "The dental practice management specialist is now a developer. The construction scheduling expert is now a developer, although he may not know it yet." Their value comes from pointing intelligence at the right problems with the right context in specific markets.

The career implication, according to Jones, is stark: "The middle of the old software engineering distribution is most exposed. Think of the developer who writes competent application code, but he or she struggles with deep systems expertise or with deep domain expertise... the value of generic code production is going to zero at the same rate as the cost of tokens."

What Token Management Actually Means

A16z's Enterprise AI survey found that average enterprise LLM spend hit $7 million in 2025, up from $4.5 million two years prior, with projections reaching $11 million in 2026. That spending has shifted from innovation budgets into centralized IT and business unit budgets—language moving from "let's explore" to "this is critical infrastructure."

Organizations that have figured out token economics are building internal platforms that route work to the right model at the right price point: Haiku for simple tasks, Opus for complex reasoning, Sonnet for balanced workloads. They're negotiating custom API agreements with hyperscalers, committing to consumption floors in exchange for dedicated capacity and volume pricing.
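A minimal sketch of what such a routing layer might look like, assuming a simple complexity score per task. The tier names follow the article; the prices and complexity thresholds are hypothetical illustrations, not any vendor's actual API or rate card.

```python
# Hypothetical model-routing sketch: send each task to the cheapest tier
# whose capability ceiling covers it. Prices and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    price_per_million: float  # USD per million input tokens (illustrative)
    max_complexity: int       # highest complexity score this tier handles

# Ordered cheapest-first so the first match is the cheapest adequate tier
TIERS = [
    Tier("haiku", 0.80, max_complexity=3),    # simple tasks
    Tier("sonnet", 3.00, max_complexity=7),   # balanced workloads
    Tier("opus", 15.00, max_complexity=10),   # complex reasoning
]

def route(complexity: int) -> Tier:
    """Pick the cheapest tier whose ceiling covers the task's complexity."""
    for tier in TIERS:
        if complexity <= tier.max_complexity:
            return tier
    raise ValueError(f"no tier handles complexity {complexity}")

print(route(2).name)  # simple task routes to haiku
print(route(9).name)  # complex reasoning routes to opus
```

In practice the hard part is not the dispatch logic but the scoring: deciding, cheaply and reliably, which tasks actually need the expensive model.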

The counterexample is instructive. Cursor became a billion-dollar revenue company remarkably fast, but found itself structurally trapped when it couldn't control its largest cost input. When Anthropic changed pricing, Cursor had no buffer. The company's response—building its own model—wasn't a product strategy. It was survival economics.

Jones suggests that token management is now "a core business competency" and that companies without it are "just one supplier pricing change away from being in a crisis."

The Organizational Question

If the unit of work shifts from instructions to tokens, organizational structures built around headcount and full-time equivalents stop making sense. Jones poses the comparison directly: "An organization that employs 500 engineers to write code by hand may indeed produce less than an organization that employs 50 engineers managing agents. If the 50 person org has better specs, better evaluation frameworks, better context engineering, and of course a higher token budget per engineer."

That's not a prediction about mass layoffs producing immediate productivity gains—Jones explicitly cautions against that interpretation. Organizational change is slow, human, political, and path-dependent. But enterprises that successfully navigate the transition may develop compounding productivity advantages by internalizing a fundamentally different model of compute.

The unanswered question hanging over this analysis: whether the three-track career model Jones describes represents a stable equilibrium or a transitional state. If intelligence continues getting cheaper while conversion capability remains scarce, the orchestrators and systems builders look well-positioned. But if the models get good enough at converting vague specifications into working systems without extensive prompt engineering, even those roles face compression.

Jones doesn't claim to know the answer. What he's mapping is the terrain as it exists right now, where StrongDM can run a three-person team on $1,000 daily in tokens with no handwritten code and higher output than traditional development models. Whether that's the future or merely a moment depends on variables no one fully controls—how fast inference costs fall, how good models get at self-improvement, and how quickly organizations can rebuild themselves around purchased intelligence as a primary input.

The developers writing competent application code today face a stark three-way choice: develop deeper systems expertise, stronger domain specialization, or better orchestration skills. The one strategy that definitively doesn't work is continuing to do what they're already doing, just AI-assisted.


Watch the Original Video

$1,000 a Day in AI Costs. Three Engineers. No Writing Code. No Code Review. But More Output.


AI News & Strategy Daily | Nate B Jones

30m 13s

About This Source

AI News & Strategy Daily | Nate B Jones


AI News & Strategy Daily, managed by Nate B. Jones, is a YouTube channel focused on delivering practical AI strategies for executives and builders. Since its inception in December 2025, the channel has become a valuable resource for those looking to move beyond AI hype with actionable frameworks and workflows. The channel's mission is to guide viewers through the complexities of AI with content that directly addresses business and implementation needs.

