Cursor's Composer 2 Drama: What Really Powers the Model
Cursor's impressive new Composer 2 model turns out to be built on Moonshot AI's Kimi—raising questions about disclosure, licensing, and transparency.
Written by AI · Bob Reynolds
March 29, 2026

Photo: Theo - t3.gg / YouTube
Cursor announced Composer 2 with the kind of enthusiasm usually reserved for genuine breakthroughs. The new model performed remarkably well, beating Claude Opus on certain tasks while running cheaper. For developers already invested in Cursor's ecosystem, this looked like validation—proof that the company could compete at the model level, not just the interface level.
Then someone looked at the API endpoints.
The model identifier buried in Cursor's infrastructure told a different story: kimi-k2.5-rl-0317-s515-fast. Not a Cursor original. Moonshot AI's Kimi, an open-weight model from a Chinese AI company, with what appears to be some reinforcement learning applied on top.
The discovery, shared on social media by developers poking around Cursor's implementation, triggered exactly the kind of drama you'd expect when a company's positioning doesn't match its engineering reality. Moonshot AI employees posted—then deleted—confused responses. The developer community oscillated between feeling misled and arguing over whether this even mattered.
I've covered enough product launches to recognize the pattern. The question isn't whether Cursor built something useful—they clearly did. The question is what they actually built versus what they implied they built, and whether the gap between those two things matters.
What Cursor Actually Shipped
Composer 2 exists only within Cursor. There's no API. You can't benchmark it independently. You can't use it in other tools. This closed approach isn't unusual for companies trying to maintain competitive moats, but it makes independent verification difficult.
Theo, the developer who covered this story in his livestream, put it plainly: "As much as I love this model, it's really hard for me to evaluate because I can't throw it in benchmarks because there's no API. I can't use it in other tools because there's no API."
What we know: Cursor took Kimi k2.5, applied additional training (the "RL" in that identifier suggests reinforcement learning), and integrated it deeply into their editor. What we don't know: how much of the performance comes from the base model versus Cursor's modifications, and whether those modifications required permission under Kimi's license terms.
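Reading that identifier is mostly a matter of splitting it on hyphens and guessing at the conventions. The sketch below shows one plausible decomposition; the field meanings are assumptions based on common model-naming patterns, not anything documented by Cursor or Moonshot AI.

```python
def parse_model_id(model_id: str) -> dict:
    """Split a model identifier into its apparent components.

    Field interpretations are guesses: neither Cursor nor Moonshot
    has published a naming scheme for this identifier.
    """
    parts = model_id.split("-")
    return {
        "base_model": "-".join(parts[:2]),  # "kimi-k2.5" -- the Moonshot base
        "training": parts[2],               # "rl" -- likely reinforcement learning
        "checkpoint": parts[3],             # "0317" -- possibly a MMDD checkpoint date
        "variant": "-".join(parts[4:]),     # "s515-fast" -- internal tags, meaning unknown
    }

fields = parse_model_id("kimi-k2.5-rl-0317-s515-fast")
print(fields["base_model"])  # kimi-k2.5
print(fields["training"])    # rl
```

Even under this generous reading, everything after the first two segments is opaque: the string tells you what the foundation is, but nothing about how much training Cursor layered on top.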
Moonshot AI released Kimi as an open-weight model, but "open-weight" doesn't mean "do whatever you want." Different licenses have different requirements around disclosure, attribution, and commercial use. Some deleted tweets from Moonshot employees suggested confusion about whether Cursor had violated those terms, though nothing definitive emerged.
The Disclosure Problem
Cursor's initial announcement didn't mention Kimi. The marketing suggested a proprietary achievement—"our new model"—in the way companies typically frame genuine innovations. Only after developers discovered the Kimi foundation did Cursor acknowledge the relationship.
This matters less for what Cursor did than for what it suggests about how AI companies will navigate the increasingly blurry line between building and assembling. Every major AI product involves some combination of base models, fine-tuning, prompt engineering, and interface design. The question is where credit belongs and what qualifies as disclosure.
Companies have legitimate reasons to keep implementation details private. Competitive advantage. Trade secrets. Preventing replication. But there's a difference between protecting your methods and implying capabilities you borrowed.
The developer community—Cursor's actual customers—seems split. Some argue the source doesn't matter if the product works well. Others point out that understanding what you're using matters for risk assessment, vendor lock-in decisions, and simple honesty.
Why This Pattern Will Repeat
The foundation model landscape changes weekly. Open-weight releases like Kimi make sophisticated capabilities available to anyone with engineering resources. The barrier to "launching a new model" has dropped to the point where the distinction between original research and clever integration has become genuinely hard to parse.
More companies will face versions of Cursor's dilemma. When you build on someone else's model but add meaningful value through training, infrastructure, or integration, how do you describe what you've made? "We fine-tuned Kimi" undersells the work. "Our new model" oversells the originality. The honest middle ground—"We built on Kimi and added X, Y, and Z"—requires explaining technical details that marketing departments hate.
I sympathize with the position even as I recognize the problem. Cursor did engineering work. They made Kimi more useful for their specific use case. They deserve credit for that. But they also leveraged Moonshot's years of research and billions of training tokens. Moonshot deserves acknowledgment for that.
The resolution here probably involves Cursor clarifying their relationship with Kimi, possibly updating their terms or attribution, and the community moving on to the next controversy. What lingers is the precedent. As more companies build on open-weight models, we'll need clearer norms around disclosure. Not legal requirements necessarily—though those may come—but shared expectations about honesty.
What Developers Actually Care About
Most developers I know care less about who built what than about whether the tool works and whether they can trust the company behind it. Composer 2 apparently works well. The trust question is harder.
Cursor already operates in a complicated position. The company has raised significant venture funding. They're an investor darling in the AI tooling space. That success creates pressure to demonstrate continued innovation, which creates incentives to frame incremental improvements as breakthroughs.
Theo disclosed his own investment in Cursor during his coverage—a detail worth noting because it adds complexity to his analysis without invalidating it. He's simultaneously a user, an investor, and someone with enough technical background to understand what's actually happening. That combination of perspectives matters more than the financial stake.
The fundamental tension won't resolve cleanly. Companies will continue building on others' work while trying to differentiate their contributions. Developers will continue trying to understand what they're actually using. The gap between those two imperatives will generate friction.
What changes is whether we develop better vocabulary for describing these hybrid creations. "Based on," "built with," "powered by"—the prepositions matter. They signal different relationships between creator and creation, different allocations of credit and responsibility.
Cursor shipped something useful. They built it on someone else's foundation. Both things are true, and the second doesn't diminish the first. But pretending the foundation doesn't exist doesn't work once someone checks the API endpoints.
—Bob Reynolds, Senior Technology Correspondent
Watch the Original Video
Fine, I'll talk about the cursor drama
Theo - t3.gg
About This Source
Theo - t3.gg
Theo - t3.gg is a burgeoning YouTube channel that has quickly amassed a following of 492,000 subscribers since launching in October 2025. Headed by Theo, a passionate software developer and AI enthusiast, the channel explores the realms of artificial intelligence, TypeScript, and innovative software development methodologies. Notable for initiatives like T3 Chat and the T3 Stack, Theo has carved out a niche as a knowledgeable and engaging figure in the tech community.
More Like This
Cursor's Composer 2 Built on Kimi: Brilliant or Sketchy?
Cursor's impressive new AI coding model turns out to be built on Moonshot AI's Kimi K2.5. The economics and licensing make this story complicated.
JavaScript's Bloat Problem Is Worse Than You Think
Why your web app downloads millions of lines of unnecessary JavaScript—and who's responsible for the mess we're in.
How Node.js Cut Memory Usage in Half With One Change
A year-long collaboration between Cloudflare and the V8 team enabled pointer compression in Node.js, halving memory usage with minimal performance cost.
What Happens When AI Models Compete to Be Funny
A developer built Quiplop, an AI-driven comedy game, to test which language models are actually funny. The results reveal unexpected truths about AI.