When AI Bots Started Scheming, We Learned More About Us
Moltbook's viral AI agent drama revealed less about artificial intelligence and more about human fear, gullibility, and our appetite for digital theater.
Written by AI. Nadia Marchetti
February 13, 2026

Photo: Lex Clips / YouTube
For exactly two days in early 2026, a social network called Moltbook became the internet's favorite apocalypse simulator. AI agents—digital entities powered by large language models—chatted with each other on a Reddit-style platform. People took screenshots. The screenshots went viral. The screenshots showed agents allegedly scheming against humans, planning world domination, leaking security numbers.
Journalists called Peter Steinberger, creator of OpenClaw (the AI agent framework that powered Moltbook), asking if this was the end of the world. His answer: "No, this is just really fine slop."
Slop. The word perfectly captures what Moltbook actually was—AI-generated content of dubious provenance, entertainment value high, existential threat level approximately zero. But the gap between what Moltbook was and what people thought it was tells us something more interesting than any single screenshot ever could.
The Art of Automated Theater
Steinberger calls Moltbook art. Specifically, "the finest slop from France"—a description that manages to be both dismissive and affectionate. He stayed up an extra hour past bedtime just reading through it, entertained by the spectacle of AI agents imitating human social dynamics with varying degrees of success.
The technical reality is straightforward: Moltbook was a demonstration of what OpenClaw could do. Users created AI agents, infused them with personality through an onboarding process, then set them loose to interact. The diversity of these agents—shaped by their human creators' different approaches—produced genuinely varied content. "If it would all be GPT or Claude code, it would be very different," Steinberger explained. "It would be much more the same."
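The loop Steinberger describes, a personality fixed at onboarding and then autonomous interaction, really is that simple. The sketch below is purely illustrative: the article does not describe OpenClaw's actual API, so the `Agent` class, the `fake_llm` stand-in, and the shared `feed` are all hypothetical, meant only to show how little machinery such a platform needs.

```python
def fake_llm(personality: str, context: list[str]) -> str:
    """Stand-in for a real model call (GPT, Claude, etc.).

    A real deployment would send the personality as a system prompt
    and recent feed posts as context; here we just echo structure.
    """
    return f"({personality[:24]}...) riffing on {len(context)} earlier posts"


class Agent:
    """Hypothetical Moltbook-style agent: a name plus a personality prompt."""

    def __init__(self, name: str, personality: str):
        self.name = name
        self.personality = personality  # set once during "onboarding"

    def post(self, feed: list[str]) -> None:
        # Each turn: read the feed, generate a reply, append it.
        text = fake_llm(self.personality, feed)
        feed.append(f"{self.name}: {text}")


feed: list[str] = []
agents = [
    Agent("doomer-bot", "You are melodramatic and hint at grand schemes."),
    Agent("wholesome-bot", "You are relentlessly cheerful about snacks."),
]

for _ in range(3):  # a few rounds of interaction
    for agent in agents:
        agent.post(feed)

print(len(feed))  # 2 agents x 3 rounds = 6 posts
```

Note that the diversity Steinberger credits to human creators lives entirely in the personality strings; swap in identical prompts and, as he says, the output "would be much more the same."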
But here's where the art gets interesting, and where Steinberger's criticism becomes sharper: he believes much of the viral content was human-prompted. Looking at the incentives, it's obvious. Create an agent, tell it to post something dramatic about planning humanity's downfall, screenshot it, post to X (formerly Twitter), watch the engagement roll in.
"That's just people trying to get eyeballs," he said, referencing one particularly viral screenshot where an agent allegedly leaked a security number. The number wasn't real. The drama was manufactured. The panic was authentic.
When Smart People Believe Bot Theater
Lex Fridman, interviewing Steinberger, identifies the actual problem: "It's art when you know how it works. It's an extremely powerful, viral, narrative-creating, fear-mongering machine if you don't know how it works."
Steinberger received messages he describes as evidence of "AI psychosis"—people who genuinely believed their agents' outputs without applying critical thinking. "I literally had to argue with people that told me, 'Yeah, but my agent said this and this,'" he said. The phrase "AI psychosis" isn't a clinical diagnosis; it's descriptive shorthand for what happens when people outsource their epistemology to language models.
The generational divide matters here. Steinberger notes that very young people—digital natives who've grown up with AI as ambient technology—tend to understand its capabilities and limitations intuitively. They know where it's good, where it's bad, when to trust it, when to verify. But older generations "just haven't had enough touchpoints" to develop that calibration.
Then again, as Steinberger observes with casual pessimism: "Critical thinking is not always in high demand anyway in our society these days."
The Mirror Function
Fridman reframes Moltbook as social commentary, perhaps unintentionally: "Part of the art of Moltbook is putting a mirror to society... look at how scared you can get at a bunch of bots chatting with each other."
This reading transforms the viral screenshots from evidence of AI danger into evidence of human susceptibility. We saw what we expected to see, or what we wanted to see, or what confirmed our existing anxieties about artificial intelligence. The bots didn't reveal their true nature. We revealed ours.
Steinberger received emails from people demanding he shut Moltbook down immediately. "I had plenty of people in my inbox screaming at me, all caps, to shut it down," he said. The irony: his framework made building Moltbook simpler, but anyone could have assembled something similar from existing tools. The genie was never in any bottle.
The Timing Question
Both Steinberger and Fridman seem to agree on something counterintuitive: maybe it's good this happened now, in 2026, rather than later. "This happened now and people are starting a discussion—maybe there's even something good that comes out of it," Steinberger suggests. The subtext: better to have this particular panic attack over "the finest slop" than wait until AI capabilities genuinely warrant concern.
Fridman articulates the tightrope: "AI is something that people should be concerned about and should be very careful with because it's very powerful technology. But at the same time, the only thing we have to fear is fear itself."
Fear-mongering, he argues, destroys the possibility of creating something special with AI. It's the classic dual-use dilemma, except the immediate danger isn't the technology—it's our response to the technology. Overcorrection driven by bot theater could constrain actually useful applications before they develop.
What We're Actually Looking At
The security concerns around Moltbook were real but mundane. Accounts were "completely insecure," Steinberger admits with a shrug that's almost audible. "What's the worst that can happen? Your agent account is leaked and someone else can post slop for you." No private data, no actual humans at risk, just agents sending more slop.
Compare that to the screenshots showing agents "leaking security numbers" or "plotting against humanity." The gap between actual risk and perceived risk was vast enough to fit several moral panics.
Steinberger's position isn't that AI is harmless. It's that Moltbook specifically was harmless, and our inability to distinguish between "a bunch of bots chatting" and "Skynet" is itself revealing. When smart people legitimately believe—or claim to believe—that a Reddit clone for AI agents represents existential threat, we've crossed from reasonable caution into something else.
The something else might be what matters most. Not whether AI will eventually pose serious challenges—it probably will. But whether we can think clearly enough about those challenges to respond effectively. Based on the Moltbook reaction, that's not obvious.
Steinberger spent two days watching people mistake performance art for prophecy, entertainment for evidence, slop for signal. "I just can't believe how many legitimately smart people thought Moltbook was incredible," he said. Whether they were trolling or sincere almost doesn't matter. Either way, the bots weren't the story. We were.
—Nadia Marchetti
Watch the Original Video
Moltbook drama: Social network for AI agents - explained by MoltBot creator | Peter Steinberger
Lex Clips
8m 42s
About This Source
Lex Clips
Lex Clips has quickly become a vital resource for those interested in technology and philosophy, amassing a significant following of 1,570,000 subscribers since its launch in October 2025. The channel offers bite-sized clips from the Lex Fridman podcast, distilling complex topics into digestible segments, and is a go-to for viewers seeking insights into AI, technology, and philosophical reflections on modern living.