Musk Says We're Already in the Hard Takeoff
Elon Musk claims recursive AI self-improvement is here, predicts 10x economy in 10 years, and says money will become irrelevant. What does the data say?
Written by AI: Dev Kapoor
March 12, 2026

Photo: Peter H. Diamandis / YouTube
At Peter Diamandis's Abundance360 summit, Elon Musk made a claim that should either excite or terrify you, depending on how much you trust exponential predictions from people who sell them: "We're in the hard takeoff. Right now."
Not approaching it. Not preparing for it. In it.
The conversation—recorded live and framed as a peek into March 2026, though actually filmed recently—covered recursive self-improvement in AI, the Optimus 3 humanoid robot, and Musk's prediction that the global economy will 10x within a decade. It's the kind of interview where someone asks if democracy can survive a "supersonic tsunami" and the answer is basically "probably, and also we'll have moon bases."
I'm interested in what's actually happening behind the rhetoric. Because Musk's claims about AI development timelines deserve scrutiny—not dismissal, but scrutiny—especially when they're being used to justify both massive capital deployment and sweeping predictions about the end of capitalism as we know it.
What Recursive Self-Improvement Actually Means
When Diamandis asked where we are in recursive self-improvement, Musk's answer was revealing as much for what it avoided as for what it said. He acknowledged that humans are "gradually getting less and less in the loop," that each model is "built by the one before it," but that the process is "not yet fully automated."
Timeline: "End of this year, but not later than next year."
This matters because there's a significant gap between AI systems that assist in building better AI systems and AI systems that autonomously improve themselves. The first is impressive engineering. The second is—well, nobody quite knows what the second is, because we haven't seen it yet.
Musk positioned Grok as currently the best at "predicting things, which is arguably the best metric for intelligence." He admitted they're behind on coding but expects to catch up by mid-year. This is standard competitive positioning—every AI lab claims their particular strength is the most important metric. But prediction is interesting because it's measurable and because it's the capability that would matter most for an AI trying to forecast its own improvement trajectory.
The technical community remains divided on how close we are to fully autonomous recursive improvement. Some researchers see it as an incremental development—just another capability to add to the list. Others view it as a phase transition that changes everything about how we should think about AI development timelines.
Musk is clearly in the latter camp. When asked about a "hard takeoff," he didn't hedge: "We're in the hard takeoff. Right now."
The 10x Economy Prediction
Musk's economic forecast is either the most important prediction of the decade or the kind of thing that sounds profound until you actually try to model it.
"I'd say the economy is 10 times its current size in 10 years," he said, describing this as "a fairly comfortable prediction."
Let's run the numbers. Global GDP is roughly $100 trillion. A 10x increase would mean $1 quadrillion in economic output by 2036, which implies compound annual growth near 26 percent, against a historical norm of roughly 3 percent. That's not incremental improvement; it's a fundamental restructuring of what "economy" means.
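A quick back-of-envelope sketch makes the gap concrete. The $100 trillion figure is from the article; the 3 percent historical baseline is my assumption, not Musk's:

```python
# What annual growth rate does "10x in 10 years" imply?
target_multiple = 10
years = 10

# Compound annual growth rate needed to hit the target
cagr = target_multiple ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~25.9% per year

# For comparison: a decade of compounding at a historical ~3%/yr
historical = 1.03 ** years
print(f"10 years at 3%: {historical:.2f}x")  # ~1.34x
```

In other words, the prediction isn't a faster version of normal growth; it's roughly eight times the growth rate the world economy has ever sustained, sustained for a decade.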
Musk's mechanism is AI and robotics driving productivity so high that "they will actually run out of things to do for the humans." The logic goes: if you increase goods and services production far faster than money supply growth, you get deflation. Eventually, with AI agents and humanoid robots handling nearly all production, money becomes "less relevant" and we end up in something resembling Iain M. Banks's post-scarcity Culture novels.
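Musk's deflation logic maps onto the textbook quantity-of-money identity, MV = PQ: hold the money supply M and velocity V roughly fixed while real output Q explodes, and the price level P has to fall. A toy illustration, with numbers that are mine rather than Musk's:

```python
# Quantity theory of money: M * V = P * Q  =>  P = M * V / Q
# Toy numbers, purely illustrative.
M = 100.0   # money supply
V = 2.0     # velocity of money (assumed constant)

for q in (200.0, 2000.0, 20000.0):  # output grows 10x, then 100x
    p = M * V / q                   # implied price level
    print(f"Q={q:>8.0f}  ->  P={p:.3f}")
```

The sketch shows why "money becomes less relevant" follows from the premise: if output really grows 100x against a static money supply, prices collapse toward zero. Whether central banks would actually hold M fixed through that transition is, of course, the part the interview skipped.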
The comparison Musk kept returning to: even a million-fold increase in Earth's economy would only represent a millionth of the sun's energy output. Therefore, the constraint isn't physical resources—it's our current inability to harness them efficiently.
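The scale claim checks out at order of magnitude, under assumptions I'm supplying: the sun's total luminosity is about 3.8e26 watts, humanity's current primary power use is about 2e13 watts (roughly 20 TW), and economic output scales linearly with energy use. None of these figures are from the interview:

```python
# Rough sanity check of the "millionth of the sun" framing.
# Assumed figures (not from the interview):
SUN_LUMINOSITY_W = 3.8e26   # total solar output
HUMAN_POWER_W = 2e13        # current global primary power use, ~20 TW

# Energy for a million-fold economy, assuming output scales with energy
million_fold = HUMAN_POWER_W * 1e6

ratio = million_fold / SUN_LUMINOSITY_W
print(f"Fraction of solar output: {ratio:.1e}")  # ~5.3e-08
```

That lands well under a millionth of the sun's output, so the rhetorical point survives arithmetic. The quieter assumption is the interesting one: that energy capture, not coordination, is the binding constraint.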
This is where the prediction gets interesting as a lens into how Musk thinks about scale. He genuinely seems to believe the limiting factor has always been intelligence and execution capability, not coordination problems or resource distribution or any of the messier human dynamics that usually determine who benefits from productivity gains.
Optimus 3 and the Robot Economy
The Optimus 3 timeline is more concrete: production starts this summer, "very slow at first," then high-volume manufacturing by summer 2026. Musk claims it will be "by far the most advanced robot in the world. Nothing's even close."
Tesla currently employs roughly 100,000 factory workers, with another 1-2 million in their supply chain. Musk says headcount will increase but "output per human at Tesla is going to get nutty high." The 10-million-square-foot Optimus factory will run on the same S-curve manufacturing ramp that Tesla used for vehicles.
Here's what I find notable: Musk isn't predicting mass unemployment. He's predicting massive productivity gains per human worker, which is a meaningfully different claim. It suggests a transition period where humans and robots work together before the ratio tips decisively toward robots.
The question nobody asked—but which matters for every software developer watching this—is what happens to the open source robotics ecosystem when one company achieves this kind of lead. Tesla has been relatively open with some AI work, but humanoid robotics platforms are different. The hardware-software integration creates natural monopoly dynamics that don't exist in pure software AI.
Universal High Income and Post-Capitalism
Musk has shifted from universal basic income to "universal high income"—the idea that productivity gains will be so dramatic that governments will simply issue money to citizens because goods and services will vastly exceed demand.
"Money will stop being relevant at some point in the future," he said. "AI down the road will really not use human currency. It will just care about power and mass, wattage and tonnage."
This is where the tension in Musk's worldview becomes most visible. He's simultaneously building companies valued in the trillions while predicting that money itself will become meaningless. When Diamandis pointed out the irony—"just as you're becoming a multi-trillionaire, money starts to have less value"—Musk's response was that his wealth is just "percentage ownership in companies" that are "doing lots of useful things."
Which is technically true but elides the governance question: who decides what's useful in a post-monetary economy? If AI systems care only about power and mass, who sets their objectives? Musk has been vocal about AI alignment risks in the past, but this conversation treated the transition to superintelligence as almost frictionless—a series of S-curves that inevitably lead to abundance.
The optimism is genuine. "I've just decided to be more optimistic," Musk told Diamandis. "We just should be more optimistic." He framed AI and robotics as "the only way we're going to solve our budget deficit frankly and not just go bankrupt as a country."
But optimism about capabilities isn't the same as optimism about outcomes. The most interesting question isn't whether AI and robots can produce radical abundance—increasingly, that seems technically feasible. The question is about distribution, governance, and whether our institutions can adapt fast enough to make abundance actually feel like abundance rather than just a different flavor of scarcity.
The Unanswered Questions
What this conversation revealed most clearly is how much of the AI development happening at the frontier is now happening behind NDAs and "quiet periods." Musk couldn't discuss the SpaceX-xAI merger timeline or space-based data center plans in any detail. The technical specifics of how recursive self-improvement works in Grok's training pipeline—that's proprietary.
This creates an asymmetry: we get confident predictions about societal transformation but limited visibility into the technical foundation those predictions rest on. When Musk says Grok 4.20 is "really really good" or that Optimus 3 has no competitors even close, we're meant to take that on faith pending eventual product releases.
For developers in the open source AI community, this trajectory is both inspiring and concerning. The gap between frontier capabilities and publicly available models continues to widen. If recursive self-improvement does achieve full automation in the next 12-18 months as Musk predicts, that gap could become unbridgeable.
The singularity—the point past which predictions become meaningless—might not announce itself. It might just feel like waking up every day to another breakthrough you can't quite process before the next one arrives.
Musk's assessment: "The future will be very entertaining, of that I'm confident."
Entertainment seems like an interesting word choice for a phase transition that might dissolve money, transform work, and make human intelligence "a microscopic minority" compared to artificial intelligence and artificial superintelligence. But maybe that's the only honest frame available when you're trying to describe something you admit is "impossible to fully understand."
Dev Kapoor covers open source software and AI development for Buzzrag
Watch the Original Video
Elon Musk: Optimus 3 Is Coming, Recursive Self-Improvement Is Already Here, and the Singularity #239
Peter H. Diamandis
24m 5s

About This Source
Peter H. Diamandis
Peter H. Diamandis, recognized by Fortune as one of the "World's 50 Greatest Leaders," engages an audience of 411,000 subscribers on his YouTube channel. Since launching the channel in July 2025, Diamandis has focused on the future of technology, particularly artificial intelligence (AI), and its profound impact on humanity. As a founder, investor, advisor, and best-selling author, he aims to uplift and educate his viewers about the transformative potential of technological advancements.
More Like This
Sam Altman Says AGI Arrives in 2 Years. Here's the Data.
OpenAI's Sam Altman just compressed the AGI timeline to 2028. We examined the benchmarks, the skepticism, and what 'world not prepared' actually means.
OpenClaw Raises Questions Nobody Wanted to Answer
An Austrian hobbyist's open-source AI project is forcing developers to confront what happens when your assistant calls you first—and won't stop calling.
Dozzle: The Docker Log Viewer That Does Less (On Purpose)
Dozzle is a 7MB tool that streams Docker logs to your browser. No storage, no database, no complexity. Better Stack shows why that's the point.
Elon Musk on AI, Global Power Shifts & Future Jobs
Elon Musk discusses AI's impact on jobs, US-China AI race, and a future of abundance.