Figure's Humanoid Robots Ditched C++ for Neural Nets
Brett Adcock's Figure removed 109,000 lines of code, betting everything on neural networks. It's either genius or expensive hubris.
Written by AI. Mike Sullivan
February 12, 2026

Photo: Peter H. Diamandis / YouTube
Brett Adcock walked Peter Diamandis through Figure's San Jose headquarters—400,000 square feet of humanoid robots at various stages of assembly. Some were learning to load dishwashers. Others were navigating hallways. A few were just heads on assembly lines, which Diamandis noted was "the most surreal."
But the real story isn't what the robots can do. It's what Figure deleted to make them do it.
Adcock's team just removed the last 109,000 lines of C++ code from their humanoid control system. Gone. Replaced entirely with neural networks running their Helix 2 software. They'd already ditched several hundred thousand lines in previous iterations, but this was the final purge—lower body control, the whole stack, everything.
"All neural net," Adcock said. "That's a full body."
I've seen this movie before. Remember when everyone was rewriting everything in JavaScript because Node.js was going to eat the world? Or when microservices were going to solve all our monolith problems? The industry loves a good architectural revolution, especially when venture money is flowing.
But this one's different in ways that matter—and ways that should make you nervous.
The Old Way Was Expensive and Stupid
Writing robot control in C++ is like building a house by specifying the position of every nail. You can do it. People have done it for decades. It's just breathtakingly inefficient for anything that needs to adapt.
Adcock estimated their old codebase cost "probably 100 bucks a line to write." Several hundred thousand lines means tens of millions of dollars invested in code that couldn't handle unexpected situations. A robot programmed to load a dishwasher would fail spectacularly if you moved the dishwasher six inches to the left.
Neural networks don't work that way. They learn patterns from data, which means one robot's experience becomes every robot's knowledge. When Figure's robot figured out (pun intended) that it could use its hip to close a cabinet door, that wasn't programmed—it emerged from the training data.
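To make the contrast concrete, here's a toy one-dimensional sketch — emphatically not Figure's Helix stack, just an illustration of the principle. A scripted controller bakes the dishwasher's position into the code, while a simple policy fit to demonstration data reaches wherever the dishwasher actually is:

```python
# Toy 1-D illustration, not Figure's method: a scripted controller
# hard-codes where the dishwasher is, while a policy fit to demonstration
# data reaches wherever the dishwasher actually appears.

DISHWASHER_X = 1.0  # position baked in at programming time (meters)

def scripted_reach(observed_x):
    # Classic hand-coded control: ignores the observation entirely.
    return DISHWASHER_X

def fit_policy(observed, reached):
    # Ordinary least-squares line fit: action = w * observation + b.
    n = len(observed)
    mx, my = sum(observed) / n, sum(reached) / n
    w = sum((x - mx) * (y - my) for x, y in zip(observed, reached)) \
        / sum((x - mx) ** 2 for x in observed)
    return lambda x: w * x + (my - w * mx)

# Demonstrations: the robot reached wherever the dishwasher really was.
demos = [0.8, 1.0, 1.2, 1.5]
learned_reach = fit_policy(demos, demos)

moved_x = 1.15  # dishwasher shifted about six inches
print(abs(scripted_reach(moved_x) - moved_x) < 0.01)  # False: script misses
print(abs(learned_reach(moved_x) - moved_x) < 0.01)   # True: policy adapts
```

A real learned controller maps camera images and joint encoders to motor torques through a deep network, but the failure mode of the scripted version is exactly the one Adcock describes: move the target and the hard-coded numbers are simply wrong.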
"That's the neural net difference," one of the hosts noted. "You get unexpected behavior, you know, both good and bad, but things you could never code up."
That last part—"both good and bad"—is doing some heavy lifting.
The Physics Problem Nobody's Solved
Here's where it gets interesting. Large language models like GPT-4 have what Adcock calls "semantic grounding"—they understand concepts, relationships, common sense. Ask ChatGPT if it knows how to play soccer and it'll confidently say yes.
But install that same LLM in a physical robot and "it has no idea what it's actually doing," as Diamandis put it.
Adcock tried this recently with his new AI lab, HARK (yes, he founded a separate AI company while running Figure—these people never stop). He gave one of their multimodal models basic navigation controls and asked it to find the exit sign. It headed in the right direction, then walked straight into a glass wall.
"Like kids," someone joked. But the problem is deeper than that.
A humanoid robot has 40-plus degrees of freedom: roughly 40 motors that can each rotate through 360 degrees. Treat each degree as a distinct position and you get 360 to the power of 40 possible configurations, on the order of 10^102. "There's more states of the humanoid than atoms in the universe," Adcock noted, and the arithmetic holds: the observable universe is estimated to contain around 10^80 atoms.
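That claim is easy to sanity-check, taking 10^80 as the standard order-of-magnitude estimate for atoms in the observable universe:

```python
# Back-of-envelope check of the state-count claim: 40 joints, each treated
# as having 360 distinguishable positions.
states = 360 ** 40
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80  # common order-of-magnitude estimate

print(len(str(states)) - 1)                   # 102: states is ~1.8 x 10^102
print(states > ATOMS_IN_OBSERVABLE_UNIVERSE)  # True, by over 20 orders of magnitude
```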
You can't simulate those one by one. You can't pre-program them. You need the robot to understand physics implicitly—where to position its elbow, pelvis, torso, head, and fingertips to grab a water bottle without falling over. When you reach across a table, your pelvis automatically shifts backward to maintain balance. You never think about it.
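The pelvis-shift example reduces to center-of-mass bookkeeping. Here's a two-mass toy model — hypothetical masses, nothing like a real whole-body controller — showing why reaching forward forces the pelvis backward to keep the combined center of mass over the feet:

```python
# Toy 2-mass balance model (not Figure's controller): reaching the arm
# forward forces the pelvis backward so the combined center of mass
# stays over the feet (taken to be at x = 0).

def com_x(pelvis_x, arm_x, pelvis_mass=60.0, arm_mass=5.0):
    # Weighted average position of the two masses.
    total = pelvis_mass + arm_mass
    return (pelvis_mass * pelvis_x + arm_mass * arm_x) / total

def pelvis_shift_for_reach(arm_x, pelvis_mass=60.0, arm_mass=5.0):
    # Solve com_x(pelvis_x, arm_x) == 0 for pelvis_x.
    return -(arm_mass / pelvis_mass) * arm_x

arm_reach = 0.6  # meters: reaching across a table
pelvis_x = pelvis_shift_for_reach(arm_reach)
print(round(pelvis_x, 3))                      # -0.05: pelvis shifts back 5 cm
print(abs(com_x(pelvis_x, arm_reach)) < 1e-9)  # True: CoM stays over the feet
```

Your nervous system solves a far harder version of this continuously, with dozens of coupled joints; the point of learned control is that the robot picks up the same compensations from data instead of from hand-derived equations.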
Teaching a robot that kind of embodied knowledge is what Figure's been working on. They call it "room scale autonomy"—the ability to complete complex tasks throughout an entire room, then eventually a whole house.
Whether they've actually solved it is another question.
The OpenAI Divorce
Figure initially partnered with OpenAI, who co-led their Series B with Microsoft. The plan was to leverage OpenAI's language models for humanoid control. It didn't work out.
"Our team just ran circles around them," Adcock said, with the careful diplomacy of someone who still might need relationships in Silicon Valley. "It just didn't make sense to train other folks on how we basically build AI models internally for embedded systems like a humanoid."
Translation: OpenAI's world-class AI researchers couldn't keep up with Figure's team on robotics-specific problems. The coffee cup demo everyone remembers? That was Figure's team, not OpenAI's.
This tracks with what we've seen across the AI industry. LLMs are incredible at text. They're getting better at images and video. But the physical world is a different beast. You can't fake physics the way you can fake writing a sonnet.
The Consolidation Question
Diamandis brought up China's reported 150+ humanoid robot companies and the obvious parallel to the early automotive industry—250 car companies in the 1900s, then consolidation down to the Big Three.
"How many survive?" he asked.
"Far less than 10," Adcock said. "Global."
That's probably right. Deep tech always consolidates. The capital requirements are too high, the technical challenges too steep, the data moats too valuable. But it raises an uncomfortable question: Is Figure building something genuinely new, or are they burning VC money on the robotics equivalent of Pets.com?
The bull case is straightforward. Labor shortages are real. Manufacturing in developed countries is expensive. A robot that can do human tasks at human speed, operating 24/7, pencils out economically at almost any price point. Adcock mentioned they'll put robots on their own assembly lines this year—robots building robots.
The bear case is equally clear. We've been promised humanoid robots for decades. Honda's ASIMO could walk stairs in 2000. Boston Dynamics has been releasing increasingly impressive videos for years. Yet none of them are doing economically useful work at scale.
Figure's bet is that neural networks change the equation—that the shift from coded behavior to learned behavior is the breakthrough that makes humanoids practical. They might be right. The progress from Figure 1 to Figure 3 in under two years is legitimately impressive.
But I've been in tech long enough to know that "this time is different" is usually wrong. The real test isn't what the robots can do in a controlled demo at headquarters. It's whether they can do useful work reliably in uncontrolled environments where things break, expectations change, and humans are unpredictable.
Adcock thinks they'll have robots in customers' homes within a few years. Maybe. But I'd bet on seeing them in warehouses and factories first, doing narrow tasks in structured environments. The sci-fi future of humanoid butlers unloading your dishwasher? That's still a moonshot.
And in this industry, moonshots have a way of landing in the ocean.
—Mike Sullivan, Technology Correspondent
Watch the Original Video
The Humanoid Takeover: $50T Market, Figure's Full Body Autonomy, and Robots in Dorms #229
Peter H. Diamandis
1h 43m

About This Source
Peter H. Diamandis
Peter H. Diamandis, recognized by Fortune as one of the "World's 50 Greatest Leaders," engages an audience of 411,000 subscribers on his YouTube channel. On the channel, launched in July 2025, Diamandis focuses on the future of technology, particularly artificial intelligence, and its impact on humanity. As a founder, investor, advisor, and best-selling author, he aims to educate viewers about the transformative potential of technological advancements.