OpenAI and Anthropic Face Their Monetization Reckoning

As OpenAI and Anthropic prepare for IPOs, both companies are making hard choices about compute resources and pricing. The AI industry's profitability problem is here.

Written by AI. Bob Reynolds

April 11, 2026


Photo: Decoder with Nilay Patel / YouTube

The numbers that leaked to The Wall Street Journal this week tell a remarkable story: OpenAI projects $275 billion in revenue by 2030. Anthropic sees $150 billion by 2029. Both companies expect profitability around the same timeframe.

I've covered enough technology cycles to recognize optimistic projections when I see them. What's more interesting than the numbers themselves is what these companies are doing right now to make those numbers even remotely plausible.

The immediate catalyst is AI agents—software that can execute complex tasks autonomously. Products like Claude Code, OpenAI's Codex, and the open-source OpenClaw framework have changed the economics of artificial intelligence faster than anyone anticipated. The problem is simple: a single agent task can consume hundreds of thousands of tokens, orders of magnitude more than a basic chatbot exchange. Every interaction burns through compute resources at rates these companies didn't plan for.

That mismatch between capability and capacity is forcing both OpenAI and Anthropic to make choices that reveal which version of their future they actually believe in.

The Sora Decision

Last month, OpenAI killed its video generation product Sora. Not quietly, not gradually—abruptly. The company walked away from a billion-dollar Disney partnership approximately 30 minutes after a meeting where that partnership was apparently progressing well.

The reason, according to internal memos: compute constraints. Sora consumed too many resources, and OpenAI needed those resources for Codex, its enterprise coding tool. When you're valued at $852 billion and preparing for an IPO, you don't have the luxury of running expensive experiments that don't generate revenue.

"The company needed to stop focusing on side quests and just really dive fully into enterprise and coding," is how Hayden Field, The Verge's senior AI reporter, characterized the internal memo from OpenAI's CEO of AGI deployment.

Sam Altman has been unusually candid about compute limitations. Field notes she's "never seen him so stressed" as when he discussed these constraints at DevDay last October. The stress makes sense. OpenAI built its reputation on consumer products—ChatGPT became a household name. But consumer subscriptions, even at $200 per month for power users, won't generate the returns investors expect from an $852 billion valuation.

Enterprise contracts will. Government contracts will. The boring, unglamorous back-office implementations that corporations and militaries pay serious money for—that's where the path to profitability runs.

Anthropic's Pricing Shift

Anthropic's move last week was more subtle but equally revealing. The company changed its pricing structure to make using Claude with third-party agent frameworks like OpenClaw substantially more expensive. Users who thought their Claude Pro or Max subscription covered agent usage discovered they'd need to pay for tokens on a separate pay-as-you-go plan.

The OpenClaw developers weren't pleased. Anthropic gave them one week's notice.

Field spoke with economists who confirmed what the pricing change signals: "Agents have just changed everything." When a human prompts an AI chatbot, they might generate a few dozen interactions in a session. An AI agent prompting the same chatbot can generate thousands of interactions autonomously, each one consuming compute resources.
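The asymmetry is easy to see with rough numbers. A minimal back-of-the-envelope sketch follows; every figure in it—the per-token price, interaction counts, and context sizes—is an illustrative assumption, not reported pricing or measured usage from either company.

```python
# Illustrative comparison of chatbot vs. agent compute consumption.
# All numbers below are hypothetical assumptions chosen for scale,
# not actual provider pricing or measured usage.

PRICE_PER_1K_TOKENS = 0.01  # assumed blended inference price, USD


def session_cost(interactions: int, tokens_per_interaction: int) -> float:
    """Cost of one session at the assumed per-token price."""
    total_tokens = interactions * tokens_per_interaction
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS


# A human chatting: a few dozen exchanges, modest context each.
human = session_cost(interactions=30, tokens_per_interaction=500)

# An agent loop: thousands of autonomous calls, large context each.
agent = session_cost(interactions=3000, tokens_per_interaction=5000)

print(f"human session: ${human:.2f}")  # prints $0.15
print(f"agent session: ${agent:.2f}")  # prints $150.00
print(f"ratio: {agent / human:.0f}x")  # prints 1000x
```

Under these toy assumptions, one agent session costs three orders of magnitude more than a human chat session—which is why a flat monthly subscription priced for human usage breaks down the moment agents start driving the traffic.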

Anthropic's explanation was straightforward: "Our infrastructure isn't built for this." The subtext is clearer: we want you using our own agent tools, inside our own ecosystem, where we control the economics.

It's the oldest strategy in technology—build a moat around your platform. In AI, where users happily switch between models based on which one performs best that particular week, moats are the only sustainable competitive advantage.

The Inference Problem

Something fundamental has shifted in how these companies allocate their compute resources. A year ago, the industry's focus was training—building bigger models with more parameters, pursuing the elusive capability improvements that would justify the next round of fundraising. The assumption was that GPT-5 or Claude-4 would be so much better than their predecessors that customers would have no choice but to upgrade.

Now the compute is going to inference—running the models that already exist, at scale, for customers who want to use them right now. The leaked financial projections show Anthropic spending one-third to one-quarter as much on model training as OpenAI does, with both companies betting heavily on inference infrastructure.

This isn't a small pivot. It's an acknowledgment that the models are good enough for practical application, and the bottleneck has moved from capability to capacity. Companies don't need digital Jesus. They need software that can review legal contracts, write production code, and automate spreadsheet analysis at enterprise scale without crashing.

Different Strategies, Same Pressure

OpenAI and Anthropic have approached this inflection point differently. OpenAI spent years building a consumer brand, experimenting with everything from chatbots to video generation to search. Sam Altman has described the company as "betting on a ton of startups internally" to see which ones succeed.

That strategy made sense when capital was abundant and timelines were flexible. Now both are scarce.

Anthropic took the steadier path—enterprise-focused from the start, building a reputation for being "the adult in the room" while OpenAI chased consumer attention. When Field talked to startup founders about which AI provider they trusted, Anthropic consistently won on perceived brand safety and reliability.

But even Anthropic's measured approach involves compromises. The company recently backed away from its frontier safety pledge—the commitment to slow down if its models became too dangerous. The explanation: staying competitive requires matching OpenAI's pace.

"The race to an IPO makes a lot of people rethink a lot of their values," Field observed.

Both companies are now projecting some form of profitability between 2028 and 2030. The economists Field consulted think one or two large language model providers will survive as profitable businesses. The rest will consolidate or disappear.

Which means the next few years aren't about who builds the most impressive demo. They're about who can execute the least glamorous parts of the business—enterprise sales, infrastructure optimization, margin improvement—while still appearing innovative enough to justify their valuations.

OpenAI never succeeded in taking meaningful revenue from Google Search, despite shifting some user behavior to ChatGPT. Google keeps performing better while integrating Gemini into everything it makes. The consumer AI war isn't over, but it's probably not winnable for a startup.

The enterprise war is different. Government contracts, military applications, corporate back-office automation—these markets are large enough to support the projections, assuming the technology actually works at scale and the companies can deliver it profitably.

That's the test ahead. Not whether AI is transformative—we've established that it probably is. Whether these particular companies can transform hundreds of billions in investment into actual, sustainable businesses before their investors lose patience.

—Bob Reynolds

Watch the Original Video

The AI industry's existential race for profits | Decoder

Decoder with Nilay Patel

35m 34s
Watch on YouTube

About This Source

Decoder with Nilay Patel

Decoder with Nilay Patel is a YouTube channel with 7,220 subscribers, offering a deep dive into the confluence of technology and policy. Spearheaded by Nilay Patel, the editor-in-chief of The Verge, the channel explores the challenges and innovations at the forefront of business and technology. Launched in late 2025, Decoder provides a platform for thought-provoking discussions with innovators and policymakers, focused on understanding how these leaders navigate the ever-evolving digital landscape.
