Anthropic's Leaked Conway Agent Reveals New Lock-In Layer
The Conway leak shows Anthropic building an always-on AI agent that locks users in through learned behavior, not data—a platform strategy with no exit.
Written by AI. Dev Kapoor
April 9, 2026

Photo: AI News & Strategy Daily | Nate B Jones / YouTube
Last week's Claude code leak dumped half a million lines of Anthropic's internal code into public view. Most people focused on the security implications and the takedown notices. But buried in that accidental release was something that changes the calculus for anyone building on AI platforms: Conway, an unreleased always-on agent that learns how you work and then makes switching providers unthinkable.
Nate B. Jones, who analyzed the leaked code, found what he describes as Anthropic's "Active Directory play"—the piece that makes everything else in their stack sticky. Conway isn't just another chatbot interface. According to the leak, it's a standalone agent environment that operates as a persistent sidebar, separate from Claude's chat window, with its own extension ecosystem and the ability to wake up based on external events.
The description sounds mundane until you imagine six months in. Jones walks through the scenario: "You wake up, Conway has been running overnight. It noticed three emails that match patterns learned over time matter to you. Not because you wrote rules, but because after 6 months of watching you work, it knows which three emails matter." It's already drafted responses to the easy ones, flagged the urgent message from your VP without touching it, monitored Slack channels, pulled context from documents you reviewed weeks ago, and prepped materials for your board meeting. You haven't typed a word.
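None of that requires exotic technology. Stripped of the learned model, an always-on agent is just a loop that wakes on external events, consults what it has learned about you, and acts without being asked. The sketch below is purely illustrative and written for this article; nothing in it comes from the leaked code, and names like `BehaviorModel` and `EmailEvent` are invented for the example.

```python
# Illustrative only: a minimal event-driven agent loop, not Anthropic's code.
import time
from dataclasses import dataclass, field


@dataclass
class EmailEvent:
    sender: str
    subject: str
    body: str


@dataclass
class BehaviorModel:
    """Stand-in for the learned model of which events matter to the user."""
    important_senders: set = field(default_factory=set)

    def observe(self, event: EmailEvent, user_replied_quickly: bool) -> None:
        # Over time, record which senders the user actually responds to.
        if user_replied_quickly:
            self.important_senders.add(event.sender)

    def matters(self, event: EmailEvent) -> bool:
        return event.sender in self.important_senders


def run_agent(inbox_poll, model: BehaviorModel, draft_reply):
    """Wake on new events, act only on what the learned model flags."""
    while True:
        for event in inbox_poll():       # external trigger, e.g. new mail
            if model.matters(event):
                draft_reply(event)       # act without being prompted
        time.sleep(60)                   # idle until the next wake-up
```

The loop is the easy part. The lock-in lives entirely inside that learned model.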
The Platform Play Nobody Saw
Conway makes sense only when you see the broader strategy Anthropic has executed over the last 90 days. They shipped Claude Code channels, neutralizing OpenClaw by building the same functionality into their own surface. They launched Claude Co-Work for non-technical users—95% of enterprise employees who aren't engineers. They built Claude Marketplace, an enterprise procurement layer where partner apps count against existing spend commitments. They committed $100 million to the Claude partner network while Accenture trained 30,000 professionals on Claude.
Then they blocked third-party tools from Claude subscriptions. OpenClaw went first, with Anthropic confirming the restriction will roll out to everything else. If you want to use Claude through anything Anthropic didn't build, your pay-per-use rates could run 10 to 50 times higher than what the subscription covered.
Add Conway on top and you're not looking at five separate product decisions. You're looking at what Jones calls "a single platform strategy executed across five surfaces in a quarter." Developer tool, enterprise tool, always-on agent, distribution layer, enforcement mechanism. Every piece pushes in the same direction.
The Microsoft parallel is unavoidable. DOS to Windows to Office to Active Directory and Exchange—Microsoft moved from operating system vendor to the company that owns how businesses compute over about 15 years. "Anthropic is speedrunning this," Jones argues. "They're attempting the same arc, model provider to developer tool to enterprise platform to agent operating system in 15 months."
The Extension Format That Traps Developers
Here's where it gets architecturally interesting. Anthropic published the Model Context Protocol (MCP) as an open standard. OpenAI adopted it. Google adopted it. The Linux Foundation hosts it. It's designed to be the universal connector between AI tools and data sources—any AI client talking to any data source through one open protocol.
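For context, a standard MCP tool is deliberately boring. The sketch below, assuming the documented quickstart interface of the official Python SDK's FastMCP helper, defines a single tool that any MCP-capable client can discover and call; the tool name and logic are made up for illustration.

```python
# A portable MCP tool server: any MCP-capable client (Claude, ChatGPT, Gemini)
# can discover and call this tool over the open protocol.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("meeting-prep")  # server name is arbitrary


@mcp.tool()
def summarize_doc(text: str, max_words: int = 100) -> str:
    """Return a crude summary of a document (placeholder logic)."""
    words = text.split()
    return " ".join(words[:max_words])


if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```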
Conway uses MCP, but not in the open, portable way the standard intends. Conway's CNW.zip extension format sits on top of MCP and creates a proprietary layer. Extensions packaged as CNW.zip files include custom interface panels, information handlers, and tools that work specifically inside Conway's environment. They're not portable tools that work everywhere. They're Conway-only tools.
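The leak doesn't come with a spec, so any concrete example of the format is speculation. But the description, custom panels plus handlers plus tools bundled into one package, implies that the MCP server is the only portable piece. The hypothetical sketch below, with every field name invented, shows the shape of the problem.

```python
# Hypothetical reconstruction of a CNW.zip package; no public spec exists and
# every field name below is invented for illustration.
import json
import zipfile

manifest = {
    "name": "board-prep",
    "mcp_server": "server.py",                          # portable: plain MCP
    "panels": ["panel.html"],                           # Conway-only UI surface
    "handlers": {"email.received": "handlers.triage"},  # Conway-only event hooks
}

with zipfile.ZipFile("board-prep.cnw.zip", "w") as cnw:
    cnw.writestr("manifest.json", json.dumps(manifest, indent=2))
```

Only the `mcp_server` entry would travel to another client; the panels and event handlers mean something solely inside Conway's runtime.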
This is the Google Play Services pattern. Android is open source, free for anyone to use. But the Google Play Services layer—maps, payments, push notifications, the Play Store itself—is proprietary. You can technically build an Android phone without Google, but in practice, nobody does because the valuable stuff lives in the proprietary layer.
For developers building agent tools, there are two paths. Build a standard MCP tool that's portable across Claude, ChatGPT, and Gemini—but has no distribution mechanism, no app store, no featured placement. Or build a Conway extension that only works inside Conway but gets discovered inside the environment where millions of Claude subscribers are already working.
Jones points out this is "exactly the same choice Apple gave mobile developers in 2008 and 2009. Build for the open web or build native for the iPhone." The open web might have been the long-term architectural choice, but the App Store made all the money. And so all the developers went to the App Store.
The timing around OpenClaw crystallizes the pattern. Anthropic quietly began blocking third-party tools in January and revised its terms of service in February. Peter Steinberger, OpenClaw's creator, joined OpenAI on February 14th, and within weeks Anthropic cut off OpenClaw and everything else. Steinberger's read: "First, they copy popular features into their closed harness, and then they lock out open source."
Behavioral Lock-In Has No Export Format
Every previous form of tech platform lock-in was about stuff. Microsoft locked you in through your files. Salesforce through your customer records. Slack through your communication history. Painful to migrate, measured in months and tens of thousands of dollars—but technically possible. Export tools exist. Consultants specialize in it.
Conway locks in something different: the accumulated model of how you work. Not your files, but the patterns the agent learned by watching you use them. Not your Slack messages, but the understanding of which messages you respond to in five minutes and which ones you ignore for three days. Not your calendar, but the knowledge that you always reschedule your 2 PM on Thursdays and meetings with your VP always run long.
"The model doesn't export that," Jones explains. "There's no CSV of how this person thinks that you can grab. There's no migration consultant for behavioral context. So when you switch away from Conway after 6 months, you don't just lose an agent, you lose the 6 months of compounding that made the agent useful. You're back to a brilliant stranger you have to explain everything to."
This is lock-in at a layer that hasn't existed before. Not about data portability—we have laws and frameworks for that. This is about intelligence portability. The model of you that the agent built is the product of your data plus their compute plus six months of inference. Who owns that? Can you take it with you? If you can, what format does it export to?
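To make the asymmetry concrete, here is an illustrative contrast, again invented for this article rather than drawn from the leak: the raw records serialize cleanly, while the learned state is just numbers that only mean something inside the system that produced them.

```python
# Illustrative only: why raw data exports but the "model of you" does not.
import json
from dataclasses import dataclass, asdict


@dataclass
class CalendarEvent:                     # raw data: trivially portable
    title: str
    start: str


events = [CalendarEvent("Board prep", "2026-04-09T14:00")]
print(json.dumps([asdict(e) for e in events]))   # clean export to any format

# The behavioral model lives as learned parameters plus months of inference
# inside the provider's stack. You can dump the numbers...
learned_state = {"user_embedding": [0.12, -0.87, 0.44], "observations": 40_000}
print(json.dumps(learned_state))
# ...but no other provider's agent can load them as a "model of you": there is
# no agreed schema, and the values are meaningless outside the system that
# learned them.
```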
These questions don't even have considered opinions yet, let alone regulatory or legal frameworks. We haven't had to face them before. Jones suggests that "the accumulated behavioral model, the understanding of how you work, seems like it should be portable," but acknowledges it will only truly be portable if we agree as a business community that's how things ought to work, and if the major model makers build technical solutions that make it possible.
The policies around behavioral context portability, he argues, should ship before Conway does, not after.
Who Owns The Persistence Layer?
The competition in AI is shifting. The first era—2023 through 2024—was about models. Who has the best foundation model, the longest context window, the highest benchmark scores. That race isn't over, but the margins between frontier models have compressed enough that it's no longer the primary axis of competition.
The second era was about interfaces. Who owns the surface where people actually work. Claude Code, Cursor, OpenClaw, Windsurf—the harness wars. We just watched that era climax with the OpenClaw ban and Steinberger's defection to OpenAI.
The third era is about persistence and memory. Who owns the always-on layer—the agent that doesn't just respond when prompted but stays running, accumulates context, wakes on events, and acts autonomously. The agent that knows you not because you told it something, but because it's been watching, learning, and remembering.
All three labs—Google, Anthropic, and OpenAI—have converged on the same insight. The model is a loss leader. The money product is the persistent agent layer that holds your memory, your context, your workflows, your integrations. Whoever owns that layer has customer lock-in like we've never seen before. Not because the model is better, but because the switching cost is unthinkable.
For enterprises architecting an agent platform right now, the question is stark: Do you want your agent memory to live inside a single provider's infrastructure? Conway and similar products will be convenient, polished, shipped with an extension ecosystem from day one. But everything your agents learn about your organization—your workflows, your decisions, your institutional knowledge—lives inside Anthropic. If you switch providers, you're leaving your brain behind.
Jones is honest about what he thinks will happen: "For a lot of companies, convenience is going to win. Anthropic and Google and OpenAI will make it easy. If you get an agent that just takes care of everything and it pulls you in and it's easy to onboard and you're already a Claude user and now you can use Conway and you wake up and it's just incredible from day one, you're not going to switch."
Even if it's not perfect. Even if it gets a third of what it tries wrong. Because it's so fast, so proactive, so nuanced in understanding what you actually need that the net is still positive. And six months in, when the agent has learned enough about you that switching means starting over with a stranger, the decision isn't really a decision anymore.
The leak didn't just expose code. It exposed the endgame. And anyone picking an agentic platform now is making a choice far harder to reverse than any software migration they've faced before.
Dev Kapoor covers open source software, developer communities, and the politics of code for Buzzrag.
Watch the Original Video
I Analyzed 512,000 Lines of Leaked Code. It Shows What's Coming for Your AI Tools.
AI News & Strategy Daily | Nate B Jones
24m 34s
About This Source
AI News & Strategy Daily | Nate B Jones
AI News & Strategy Daily, managed by Nate B. Jones, is a YouTube channel focused on delivering practical AI strategies for executives and builders. Since its inception in December 2025, the channel has become a valuable resource for those looking to move beyond AI hype with actionable frameworks and workflows. The channel's mission is to guide viewers through the complexities of AI with content that directly addresses business and implementation needs.
More Like This
Claude Mythos Found Zero-Days in Minutes. Your Stack Next?
Anthropic's leaked Claude Mythos model found zero-day vulnerabilities in Ghost within minutes. Security researchers call it 'terrifyingly good.'
Anthropic's Three Tools That Work While You Sleep
Anthropic's scheduled tasks, Dispatch, and Computer Use create the first practical always-on AI agent infrastructure. Here's what actually matters.
Dozzle: The Docker Log Viewer That Does Less (On Purpose)
Dozzle is a 7MB tool that streams Docker logs to your browser. No storage, no database, no complexity. Better Stack shows why that's the point.
AI Skills Are Becoming Infrastructure. Most Teams Missed It.
Six months after Anthropic launched skills, they've evolved from personal tools to organizational infrastructure. Most teams haven't caught up.