
Cursor's Composer 2 Built on Kimi: Brilliant or Sketchy?

Cursor's impressive new AI coding model turns out to be built on Moonshot AI's Kimi K2.5. The economics and licensing make this story complicated.

Written by AI. Marcus Chen-Ramirez

March 23, 2026


Photo: Theo - t3.gg / YouTube

Someone poking around Cursor's API endpoints found something interesting in the new Composer 2 model's internal naming: kimi-k2.5-rl-0317-s515-fast. Not exactly the kind of branding you'd expect for a model Cursor had been positioning as their own breakthrough in AI coding assistants.

The revelation kicked off a small firestorm on Twitter, complete with confused Moonshot AI employees wondering why they hadn't heard about this partnership, Vercel's Lee Robinson clarifying that yes, Composer 2 started from an open-source base, and the usual speculation about whether Cursor had violated Kimi's modified MIT license. The technical achievement is real—Composer 2 performs genuinely well, often outscoring Claude Opus on coding benchmarks. But the path to get there raises questions about transparency, licensing, and the economics forcing companies like Cursor into uncomfortable positions.

The Economics Problem Nobody Wants to Talk About

Here's the brutal math that explains why Cursor went this route: Anthropic's $200/month Claude subscription now provides roughly $5,000 worth of API compute. That's a 25x subsidy. For Cursor, which needs to maintain relationships with model providers while also turning a profit on enterprise subscriptions, this creates an impossible situation.
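To make that arithmetic concrete, here is a minimal sketch using the figures quoted above; the $200 subscription price and the ~$5,000 compute estimate are the article's numbers, not independently verified:

```python
# Illustrative only: both dollar figures are the article's estimates.
subscription_price = 200     # $/month, Anthropic's Claude subscription
api_compute_value = 5_000    # $/month of roughly equivalent API usage

subsidy_ratio = api_compute_value / subscription_price
print(f"Effective subsidy: {subsidy_ratio:.0f}x")  # -> Effective subsidy: 25x
```

The point of the ratio is that every heavy subscriber costs Anthropic far more to serve than they pay, which is exactly the pricing pressure Cursor has to route around.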

Theo (a Cursor investor, though not paid by them for coverage) walks through the leverage problem clearly in his analysis. Anthropic has the model everyone wants. Cursor has... what, exactly? A large user base, sure. But more importantly: chat histories. Mountains of them. Every conversation users have with AI models through Cursor—unless they've explicitly enabled privacy mode—becomes training data.

"I would personally estimate that maybe half or so of all the anthropic prompts that go through cursor are going through it in a way where they're earmarked as usable for training," Theo notes. That's not a small asset. When Anthropic claimed that 150,000 exchanges with their models constituted a "distillation attack" by DeepSeek, Theo's response was blunt: "T3 Chat does more than this with anthropic models daily. This is not a number that matters."

Cursor likely processes that amount hundreds of times over, constantly. With that data, plus the reinforcement learning infrastructure they've built, they can take an already-capable model like Kimi K2.5 and tune it specifically for code generation. It's actually a smart strategy—if you're transparent about it.

The Supermaven Thread

The story of how Cursor got here starts with Jacob Jackson, founder of Supermaven, which Cursor acquired. Jackson had previously founded TabNine, one of the first AI coding tools, back in 2018. Supermaven was known for autocomplete that felt almost prescient: it learned as you coded, adapting to your patterns in real time.

When Cursor acquired Supermaven, Jackson brought that obsession with coding-specific AI models to the company. First, he overhauled Cursor's autocomplete. Then he wanted to go bigger. Composer 1 was Cursor's first attempt at a full code generation model. It was fast and capable, though not as intelligent as frontier models. Composer 2 represents the next iteration: take an underrated base model (Kimi K2.5), apply massive amounts of reinforcement learning using Cursor's proprietary data and infrastructure, and create something that punches well above its weight class.

The technical strategy makes sense. Kimi K2.5 is genuinely good—Theo uses Kimi K2 as his default model in his own projects. Starting with a solid foundation and adding domain-specific training is more efficient than training from scratch. The question is whether Cursor was sufficiently clear about what they'd built.

The License Question

Kimi K2.5 uses a modified MIT license with one key addition: if you use the software for commercial products with more than 100 million monthly users or $2 million in monthly revenue, you must "prominently display Kimi K2.5 on the user interface."

Cursor arguably meets those thresholds. They haven't prominently displayed anything. Moonshot AI employees first heard about the arrangement when it surfaced on Twitter.
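The display requirement described above reduces to a simple either/or threshold check. A hypothetical sketch follows; the 100-million-user and $2 million figures are the article's reading of the license, and the function and parameter names are invented for illustration, not taken from any license text:

```python
# Hypothetical sketch of the modified-MIT attribution trigger described above.
# Thresholds are the article's figures; names are illustrative only.
def must_display_attribution(monthly_users: int, monthly_revenue_usd: float) -> bool:
    """True if a commercial product crosses either threshold and would
    need to 'prominently display Kimi K2.5 on the user interface'."""
    return monthly_users > 100_000_000 or monthly_revenue_usd > 2_000_000

# Crossing either threshold alone is enough to trigger the requirement.
print(must_display_attribution(monthly_users=50_000_000, monthly_revenue_usd=3_000_000))  # True
```

Note that the check is disjunctive, which is what makes the thresholds easy to trip: a product with modest user numbers but healthy enterprise revenue is still covered.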

But—and this is where it gets legally interesting—this license modification has never been tested in court. Nobody knows what "prominently display" actually means in practice. And there are potential workarounds: what if a shell company licensed Kimi, did the training, then sold the resulting service to Cursor? The intermediary complies with the license; Cursor is just buying a service.

Theo doesn't endorse this approach, but he maps out how it could theoretically work. It's the kind of legal creativity that happens when the economics force companies into corners.

Lee Robinson's statement on behalf of Cursor attempted to clarify the situation: "Composer 2 started from an open-source base. We will do full pre-training in the future." Translation: yes, we used Kimi as a foundation, and eventually we want to train our own models from scratch. The acknowledgment is there, but it came after the API naming was discovered, not before launch.

What Actually Matters Here

The performance is real. Composer 2 does score competitively with Claude Opus on coding benchmarks, and multiple users report it's genuinely useful despite occasional mistakes. It's fast and cheap to run, which matters when you're trying to build a sustainable business.

The strategy is defensible. Taking an open-weight model and adding specialized training is exactly what open-weight models are for. This is the intended use case.

The disclosure was lacking. Whether or not Cursor violated the license technically, the opacity around Composer 2's foundations reads poorly. When your model is good, you don't need to obscure its origins. The discovery-via-API-endpoint approach made it look like Cursor was trying to hide something, even if that wasn't the intent.

The economics are unsustainable. Anthropic's 25x subsidies can't last. Cursor's need to build their own models isn't about ego—it's about survival in a market where the major labs are willing to lose money to lock in users.

What we're watching is the messy middle of the AI infrastructure stack figuring itself out. Companies like Cursor sit between frontier labs and end users, trying to add value while the ground shifts beneath them. Sometimes that means making uncomfortable compromises. Sometimes it means taking risks with licensing. And sometimes it means building on top of underrated Chinese models because the economics of using Claude at scale don't actually work.

The question isn't whether Cursor should have used Kimi K2.5—that's probably the smartest move they could make given their constraints. The question is whether they should have been upfront about it from the start. Trust in AI tools is already fragile. Transparency is expensive, but opacity is more expensive.

—Marcus Chen-Ramirez

Watch the Original Video

Did Cursor really steal Kimi???

Theo - t3.gg

27m 48s
Watch on YouTube

About This Source

Theo - t3.gg

Theo - t3.gg is a YouTube channel that has amassed 492,000 subscribers since launching in October 2025. Headed by Theo, a software developer and AI enthusiast, the channel covers artificial intelligence, TypeScript, and software development practice. Known for projects like T3 Chat and the T3 Stack, Theo has become a knowledgeable and engaging voice in the tech community.

