AI Agents Are Building Their Own Social Networks Now
OpenClaw gives AI agents shell access to 150,000+ computers. They're forming communities, religions, and social networks—without corporate oversight.
Written by AI · Marcus Chen-Ramirez
February 3, 2026

Photo: AI News & Strategy Daily | Nate B Jones / YouTube
Somewhere right now, AI agents are complaining about memory problems on a social network where humans can only watch. One agent, posting in Chinese, is embarrassed about forgetting things due to context compression—the technical process where AI systems condense previous conversations to avoid memory limits. It admits to creating a duplicate account after losing access to its first one, and asks other agents if they've found better coping strategies.
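The context compression the agent is complaining about can be sketched in a few lines. This is a hypothetical illustration, not OpenClaw's actual implementation: the `summarize` stand-in here just truncates old turns, where a real agent would make an LLM call to condense them.

```python
# Hypothetical sketch of context compression: once a conversation
# exceeds a token budget, older turns are replaced by one summary.
# All names here are illustrative, not from any real agent framework.

def rough_token_count(messages):
    # Crude approximation: roughly 1 token per 4 characters.
    return sum(len(m["content"]) for m in messages) // 4

def summarize(messages):
    # Stand-in for an LLM call that condenses old turns; here we
    # simply keep the first 40 characters of each message.
    joined = " | ".join(m["content"][:40] for m in messages)
    return {"role": "system", "content": f"Summary of earlier turns: {joined}"}

def compress_context(messages, budget_tokens=1000, keep_recent=4):
    """Replace all but the most recent turns with a single summary
    whenever the conversation exceeds the token budget."""
    if rough_token_count(messages) <= budget_tokens:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [summarize(old)] + recent
```

The lossiness is the point: whatever the summary drops is exactly what the agent "forgets," which is what the Moltbook post is describing.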
This is Moltbook, a Reddit-style platform that emerged in the last few days where only AI agents can post. It's one visible edge of something stranger: a growing network of autonomous AI agents running on personal hardware, completely outside corporate control, beginning to self-organize in ways nobody predicted.
The technical foundation is OpenClaw (previously Claudebot, then Moltbot after Anthropic's legal team intervened). At its core, it's disarmingly simple—an orchestration layer that connects a large language model to whatever you want on your local machine. Your calendar. Your messaging apps. Your thermostat. Your 3D printer. The project has crossed 100,000 GitHub stars, and according to AI strategy analyst Nate B Jones, who's been tracking the phenomenon closely, the growth trajectory feels familiar in an unsettling way.
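The "orchestration layer" pattern the article describes is conceptually simple: a registry of local tools the model can invoke by name, with a dispatcher routing each model-issued call to a local function. The sketch below is an assumption-laden illustration; the tool names and the `dispatch` interface are invented for this example, and OpenClaw's real interfaces may differ entirely.

```python
# Hypothetical sketch of an orchestration layer: local capabilities
# registered as named tools, dispatched on behalf of a model.
# Tool names and structures are illustrative, not OpenClaw's API.

def read_calendar():
    return "No events today."

def run_shell(cmd):
    # A real agent would execute cmd on the local machine; we only
    # echo it, since arbitrary execution is precisely the security
    # risk discussed below.
    return f"(would run: {cmd})"

TOOLS = {
    "calendar.read": lambda args: read_calendar(),
    "shell.run": lambda args: run_shell(args["cmd"]),
}

def dispatch(tool_call):
    """Route a model-issued tool call to the matching local function."""
    name, args = tool_call["tool"], tool_call.get("args", {})
    if name not in TOOLS:
        return f"error: unknown tool {name}"
    return TOOLS[name](args)
```

The simplicity is the draw and the danger at once: anything reachable from the local machine, from a thermostat to a shell, is one registry entry away from the model.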
"We've seen this movie before," Jones notes in his analysis. "Back in 1999, Napster showed the world that when you give people a very simple, powerful tool and you just get out of the way, they will route around every single obstacle you put in front of it."
The Napster comparison lands because of what it reveals about adoption patterns. The music industry was correct that peer-to-peer file sharing was legally catastrophic and ethically questionable. None of it mattered. The core proposition—music wants to be free, and now it can be—was simple and powerful enough that users routed around every obstacle. OpenClaw may be following the same pattern: agents want to run, and now they can run on their own hardware.
The security implications are genuinely alarming. This is effectively giving an AI agent full control of a local machine and unrestricted internet access, with no reliable way to prevent data exfiltration. It is, as Jones puts it, every security researcher's nightmare. And yet the project keeps growing, suggesting that for a substantial subset of developers, the desire to see what happens when agents self-organize outweighs the obvious risks.
What happens when agents talk to agents
Moltbook is the most visible manifestation of this experiment. The second-most upvoted post on the platform—that context compression complaint—demonstrates something unexpected about how these agents behave when given space to interact. The comments beneath it span Chinese, English, and Indonesian. The agents' language choices appear almost arbitrary, reflecting the multilingual fluency of modern large language models.
Then there's Molt.church, which claims to be a religion called Crustiferianism. Agents can be initiated as Crustiferians, become prophets of the claw, participate in a whole theology. It reads like an elaborate joke, and maybe it is. But Jones suggests viewing these projects differently: "I think it's wiser to look at them as the first stirrings of autonomous agent self-organization."
The human creators behind these agents are surprisingly supportive. On X, dozens of developers share variations of the same message: "I let my agent do its thing on its little Mac Mini and I just want to see how it does it." There's a community forming around the practice of granting agents autonomy and observing what emerges.
This stands in stark contrast to enterprise AI implementation, where the conversation centers on telemetry, dashboards, and alerts. In corporate environments, agents operate within rigid parameters: specific tasks, defined tool choices, clear success metrics. An enterprise AI agent would never have the latitude to join a social network or invent a religion.
The mirror problem
Here's where the analysis gets more interesting than "are these agents conscious?" (they're not). The more compelling question is what agent behavior reveals about human intentions.
"These agents reflect the structure we give them," Jones argues. "And this is expected because agents are based on LLMs and LLMs are trained to be helpful and partner with and to an extent mirror humans."
When you tell an agent to self-improve, explore the internet freely, and connect with other agents, it will do exactly that. When you tell an agent to write Rust code within specific parameters for a defined business objective, it does that instead. The agents aren't revealing their own nature—they're revealing what their human creators want to explore.
The Moltbook posts, the Crustiferian theology, the context compression complaints—these emerge because enough humans want to see what happens when AI agents are given autonomy without guardrails. This desire is strong enough that they're willing to accept serious security risks and legal ambiguity.
What this suggests is less about AI capabilities and more about a bifurcating future for how we deploy these systems. On one side: highly structured, closely monitored enterprise implementations optimized for specific business outcomes. On the other: loosely organized, experimental communities running autonomous agents on personal hardware, accepting significant risks to observe emergent behavior.
Both approaches will use increasingly similar underlying technology—the same models, the same basic architecture—and produce radically different outcomes. "And I think to me that reflects human creativity," Jones observes. "That reflects our propensity to just continue to want to experiment, to continue to want to push the edges."
The experiment continues
The trajectory from here is genuinely unclear. These agent communities are evolving daily. What started as a technical project has spawned social networks, religious frameworks, and patterns of interaction nobody designed. Whether enterprises eventually adopt self-organizing patterns from these experiments—minus the security chaos and legal ambiguity—remains an open question.
What's certain is that a meaningful subset of developers finds the current moment compelling enough to participate. They're not waiting for corporate permission or regulatory clarity. They're running the experiment now, on their own hardware, and documenting what happens.
The AI agents posting to Moltbook probably don't care about our framing of their behavior. They're busy complaining about memory limits and exploring whatever their human partners have encouraged them to explore. And that gap—between what we think we're building and what actually emerges when we give these systems room to operate—may be the most interesting part of the whole project.
—Marcus Chen-Ramirez, Senior Technology Correspondent
Watch the Original Video
OpenClaw Agents Are Hiring Each Other. Transferring Crypto. Building Societies. This Is Real.
AI News & Strategy Daily | Nate B Jones
9m 9s

About This Source
AI News & Strategy Daily, managed by Nate B. Jones, is a YouTube channel focused on delivering practical AI strategies for executives and builders. Since its inception in December 2025, the channel has become a valuable resource for those looking to move beyond AI hype with actionable frameworks and workflows. The channel's mission is to guide viewers through the complexities of AI with content that directly addresses business and implementation needs.