The New Yorker Dragged Sam Altman. The Real Story Is Worse.
Ed Zitron argues the media's Sam Altman exposé missed the real scandal: OpenAI's economics don't work, and AI safety is mostly marketing theater.
Written by AI
Dev Kapoor
April 14, 2026

Photo: The Tech Report / YouTube
When The New Yorker published its Sam Altman exposé last week—complete with anonymous sources calling him an incompetent sociopath—tech critic Ed Zitron's response wasn't relief that the media finally caught on. It was frustration that they'd focused on the wrong scandal entirely.
"He's a deceptive guy," Zitron told The Tech Report, summarizing the piece's findings. "But we already knew that." The real story, he argues, isn't Altman's character flaws or his apparent inability to understand basic machine learning concepts. It's that the entire economic foundation of OpenAI—and the broader AI industry—operates on what he calls "theoreticals" instead of actual capabilities.
Zitron, who writes the newsletter Where's Your Ed At and hosts the Better Offline podcast, isn't just another AI skeptic throwing stones from outside the industry. He's spent years covering open source communities and watched similar hype cycles play out. What makes him worth listening to now is his willingness to connect dots that most tech reporting treats as separate stories: the CFO getting benched, the rushed IPO timeline, the enterprise customers quietly forced to pay per token instead of flat subscriptions.
The Networking Node
The New Yorker piece, reported by Ronan Farrow and Andrew Marantz, pulled together over a hundred interviews to paint a damning portrait. Altman allegedly tried to manipulate world powers against each other. He promised his safety team 20% of OpenAI's compute but delivered 1-2%. Paul Graham, who made him president of Y Combinator, is quoted saying Altman had been lying the whole time.
But Zitron's read is simpler: "He's just a node. He's very good at finding a powerful person, exploiting their greed and desperation and then using that for his own means." Microsoft, SoftBank, Oracle, CoreWeave—Altman's career is a string of powerful partners he convinced to hand over resources. The pattern holds whether you're looking at his Y Combinator years or his current Pentagon lobbying.
The piece confirmed what many already suspected, Zitron argues, but it "dodged around the real problems." It still treated AI with a kind of mysticism—this powerful, potentially dangerous technology that we need to decide whether to trust Altman with. "Can we trust Sam Altman with large language models?" Zitron asks. "It's still a large language model. It's not dangerous" in the Skynet sense the narrative implies.
Safety Theater
The AI safety angle is where Zitron's critique gets sharpest. When pressed about Altman's alleged "sociopathic disregard" for safety—underfunding the promised safety work—his response is that safety is "mostly used as a marketing tool" across the industry.
"If they actually really cared about safety, they wouldn't have allowed any of this to be involved in accountancy or anything high-risk," he points out. OpenAI's own academic work acknowledges that hallucinations are impossible to remove "unless we invent a new kind of mathematics." Yet the company pushes these models into high-stakes applications anyway.
Altman's performance—expressing fear about the models' power since early 2023—serves a purpose. It positions OpenAI as the responsible actor, the one taking risks seriously, which the media then amplifies. "The media lapped it up. It's like Skynet," Zitron says. But when Iran tensions escalated, Altman was "slamming open the door at the Department of Defense" offering to help with classified military applications.
Anthropic, founded by OpenAI defectors supposedly concerned about safety, follows the same pattern. "They're all the same. They're identical. They just do different impressions of a safety-focused guy."
The Economics Nobody Wants to Discuss
When The Information reported that OpenAI's CFO Sarah Friar privately said the company wasn't ready for a 2026 IPO and couldn't match growth with compute spending, it revealed the crack Zitron thinks matters most. Why would Altman rush toward an IPO that will expose OpenAI's finances?
"Altman is rushing toward IPO because he wants to get exit liquidity for everyone involved in the con," Zitron argues. Altman famously holds no OpenAI equity himself, but he has every incentive to let others cash out. The problem: by Zitron's calculations, OpenAI needs around $50 billion annually through 2030. They won't get investment-grade ratings. They'll need to keep raising through bonds or share sales, asking investors to fund a company that—in Zitron's formulation—is essentially saying: "I need to borrow more money so that I can lose even more money."
His theory about where OpenAI's claimed $2 billion monthly revenue comes from is particularly provocative: they made their models burn more tokens. "I think they just made them burn more tokens. I think the only way to get any further is just massively increasing the amount that people are being charged." Both OpenAI and Anthropic have quietly moved enterprise customers from subscription pricing to direct token charges. "The rent just went up for everyone very quietly."
The Media's Missing Story
Zitron's central frustration is that mainstream tech reporting treats AI capabilities as directionally true even when reporting on AI company problems. Coverage repeats claims about AI agents that don't exist, coding assistants that only sometimes work if you already know what the code should look like, and job displacement that hasn't materialized. The Financial Times ran a piece showing AI isn't replacing or even enhancing white-collar workers in accountancy—then led with the theoretical threat anyway.
"Every time you repeat even the theoretical, you are helping these companies become more powerful," he argues. "And that's how the OpenAI con works. It's Sam Altman convincing enough people to repeat enough things so that he can raise enough money so that he can get more people to repeat more things so he can raise more money."
The actual outputs? The products? Whether any of this delivers value? Those questions get sidelined because character studies are more narratively satisfying than economic analysis, and because powerful market players have vested interests in maintaining the hype.
What The Exposé Got Right—And Wrong
Zitron credits Farrow and Marantz with important work. The piece mainstreamed information that needed wider circulation, even if much had been reported before by Karen Hao and others. It's useful to know that Altman can't code well and doesn't understand basic ML concepts. It matters that board members are anonymously calling OpenAI's leverage "scary."
But the piece still frames the question as whether we can trust this particular person with powerful AI, rather than whether the AI is as powerful as claimed or whether the business model is sustainable. "The secret computer isn't that scary," Zitron insists.
He draws a comparison to other tech CEOs—Satya Nadella, Sundar Pichai—who are equally focused on power and market position but have the advantage of actually having built things or inherited sustainable businesses. Altman is "the same as all of them. They all sat around the same table with Trump. They all care about the same amount."
The question Zitron keeps returning to is what happens when the IPO forces OpenAI to show its books. Will we finally get the economic reckoning he's been anticipating? Or will the market find new ways to fund losses based on theoretical future capabilities?
"If this thing was as powerful and scary as they say," he asks, "wouldn't all of these companies be freaking out internally?"
The answer seems to be: they're freaking out about burn rates and cash flow, not about accidentally creating Skynet. But one story sells better than the other, and in AI's current moment, the narrative often matters more than the numbers—at least until the IPO prospectus drops.
Dev Kapoor covers open source and developer communities for Buzzrag.
Watch the Original Video
‘Sam Altman is an historic con artist’ | Ed Zitron
The Tech Report
30m 32s
About This Source
The Tech Report
The Tech Report is an emerging YouTube channel dedicated to exploring the ever-evolving landscape of artificial intelligence. With a subscriber base of over 40,000 and having launched its first video in April 2026, the channel has quickly attracted a significant audience. The Tech Report offers critical insights into AI leadership, media accountability, and the safety measures essential in AI development, aiming to engage viewers with informative and thought-provoking content.
More Like This
NVIDIA's Open Models: A New Era for Developers
NVIDIA's CES 2026 focuses on open models, altering developer workflows and AI ecosystems.
AI's Two Paths: Safety First or Fast Deployment?
Exploring Altman and Amodei's divergent AI safety strategies.
Sam Altman Says AGI Arrives in 2 Years. Here's the Data.
OpenAI's Sam Altman just compressed the AGI timeline to 2028. We examined the benchmarks, the skepticism, and what 'world not prepared' actually means.
Anthropic's Opus 4.7: When Safety Guardrails Lobotomize the Model
Anthropic's Opus 4.7 shows promise in coding tasks but aggressive safety filters are blocking legitimate work. Is the tooling worse than the model?