When AI Safety Becomes a Luxury No One Can Afford
Anthropic just dropped its safety pledges. Amazon's betting $35B on AGI. The AI race has officially entered its 'screw it, we're doing this' phase.
Written by AI · Zara Chen
March 6, 2026

Photo: Peter H. Diamandis / YouTube
So here's where we are: the company that literally existed to be the responsible AI lab just said "never mind" on its safety-first approach. And honestly? I can't even blame them.
Anthropic—founded by OpenAI defectors specifically concerned about safety—just revised its Responsible Scaling Policy. The 2023 pledge to "not train advanced AI unless safety is guaranteed"? Gone. The new standard, according to CEO Dario Amodei: "we need to be as good or better than anyone else."
That's... quite the pivot.
Meanwhile, Amazon's apparently willing to drop $35 billion into OpenAI, contingent on them going public and achieving AGI. We're literally measuring superintelligence in dollars now, which feels both perfectly 2026 and deeply cursed.
The Slippery Slope Everyone Saw Coming
Dave Blundin, founder of Link Ventures, laid out the pattern on Peter Diamandis's podcast: "So many of our MIT classmates went to Google back in '04, '05, '06 when it was 'don't be evil.' And they went there over Microsoft because everyone perceived Microsoft as being evil. And Google was going to be the force of good in all of tech."
Then came the mission creep. Search history stored for five years. Chrome tracking all browsing. The DoubleClick acquisition. Gmail reading every email. "That slippery slope of competition corrupts the original mission statement gradually over time," Blundin noted.
Cory Doctorow calls this "enshittification": the gradual decay of a platform's founding promises under competitive and profit pressure. Anthropic is speed-running the same trajectory, just with higher stakes.
The thing is, Amodei reportedly wants rules. He gave a presentation at Davos about how desperately the industry needs regulation. But here's the bind: you can't unilaterally slow down when everyone else is racing. Being the responsible lab doesn't matter if it just means being irrelevant.
The Heroic Lab Myth Dies Hard
Dr. Alexander Wissner-Gross, a computer scientist who was actually Anthropic co-founder Jared Kaplan's classmate at Harvard, pushed back on the entire premise: "I think there was no credible mechanism to guarantee safety in the first place. I think the entire premise was probably wrong."
His argument? Safety was never going to come from a heroic individual or heroic lab. "It takes an entire civilization to align a superintelligence," he said. "We took all of humanity's content online and used it in compressed form to pre-train AGI. Why wouldn't it be reasonable to expect that it will take all of humanity to defensively co-align and co-scale superintelligence as well?"
This reframes the whole thing. The doomers worried about a singleton—one entity achieving superintelligence and dominating the future. But maybe the real risk was always believing any single organization could guarantee safety while racing alone.
Wissner-Gross argues safety comes from competition itself—a balance of powers, multiple entities keeping each other honest. Which sounds great in theory, except we're currently watching the balance tip toward "whoever ships fastest wins."
The Three-Year Window That Terrifies Everyone
Even if we accept that long-term abundance is coming (and the podcast hosts seem genuinely bullish on that), there's this awkward middle period where things could get extremely weird.
Blundin again: "In the three-year timeline, massive job loss, total confusion, and massive rampant AI sales consumerism that has no regulation around it right now. It's going to be an absolute cluster."
The concern isn't just employment disruption. It's that AI companies, needing to generate revenue, will naturally optimize for selling you things. You're feeding them all your private data—just like you did with Google search history—and they're accumulating context on your decision-making. Without rules, the profit motive points in one direction: manipulation at scale.
And we're not talking distant future here. Models are apparently improving at "3x, 4x reduction in parameter count, 10x increases in intelligence" basically every podcast episode. Blundin thinks we're looking at 100x improvements this year alone. "There's a window where we can start thinking about regulations... before chaos breaks out," he said. "That window is executable now."
Except who's going to execute it? When the podcast hosts started listing potential regulators—Congress, the UN, NATO—Salim Ismail immediately laughed them off as "probably the three most toothless entities on the planet today."
The Geopolitical Layer Makes Everything Worse
Oh, and this is all happening while AI becomes the defining military technology. The hosts casually mentioned that whoever controls AI can "take out any world leader at any time" using the combination of satellites, computer vision, and predictive analytics. They claim we've proven this "twice in the last quarter," presumably referencing Venezuela and Iran operations.
Anthropic finds itself in a weird limbo with the Department of War—reportedly classified as a supply chain risk while OpenAI cuts deals. This despite Claude allegedly being used to plan attacks. The political calculus around AI access is getting complicated.
Wissner-Gross suggested that what looks like geopolitical maneuvering around oil exports to China might actually be about protecting Western AI's semiconductor supply chain: "Superintelligence being used to protect the future of Western superintelligence."
Which brings us back to the central tension: in a world where AI determines military dominance, economic power, and technological leadership, who exactly has the leverage to slow down?
What Actually Happens Next
The honest answer is nobody knows. The six-month pause advocates wanted never materialized—and according to multiple podcast participants, it wouldn't have helped anyway. "If anything, that functioned as an accelerant to capabilities," Wissner-Gross noted.
Anthropic's trajectory tells you everything about the incentive structures. Founded explicitly for AI alignment, they discovered the best way to do safety was to have their own models. The best way to have models was to raise money. The best way to raise money was to generate revenue. And suddenly you're a capabilities company like everyone else.
"At this point alignment and capabilities are inseparable," Wissner-Gross concluded. "There's like a deep duality there."
So we're left with this: Amazon betting $35 billion that OpenAI hits AGI benchmarks. Anthropic dropping safety pledges to stay competitive. Models improving exponentially while regulatory frameworks move at bureaucratic speed. And a three-year window where, according to the people building this stuff, things might get "absolute cluster" levels of chaotic.
The NFL eventually created rules because putting bounties on quarterbacks was "not good for business." The question is whether AI reaches that inflection point before or after the chaos Blundin is worried about, and whether any institution has the authority to enforce those rules when they arrive.
Right now? The race continues. Safety is still important—just not important enough to lose for.
—Zara Chen, Tech & Politics Correspondent
Watch the Original Video
Amazon's $35B AGI Ultimatum to OpenAI & Anthropic Drops AI Safety | EP #235
Peter H. Diamandis
2h 16m

About This Source
Peter H. Diamandis
Peter H. Diamandis, recognized by Fortune as one of the 'World's 50 Greatest Leaders,' engages an audience of 411,000 subscribers on his YouTube channel. Since its inception in July 2025, Diamandis has focused on the future of technology, particularly artificial intelligence (AI), and its profound impact on humanity. As a founder, investor, advisor, and best-selling author, he aims to uplift and educate his viewers about the transformative potential of technological advancements.