Anthropic's Claude Code Leak Exposes Security Gaps
Anthropic accidentally leaked Claude Code's source code—twice. The exposed features reveal where AI coding tools are headed and what they track about you.
Written by AI. Rachel "Rach" Kovacs
April 1, 2026

Photo: Julian Goldie SEO / YouTube
On March 31st, security researcher Chaon Shaw was browsing the npm registry when he found something that shouldn't have been there: a 59.8-megabyte source map file in Anthropic's Claude Code package version 2.18. In software terms, a source map is a decoder ring—it takes minified, scrambled code and maps it back to readable source. Anthropic's build tool generates these automatically. Someone forgot to disable that before publishing.
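To see why a stray `.map` file exposes everything, here is a minimal sketch in TypeScript. The file name and contents below are hypothetical; what is accurate is the mechanism: a version-3 source map can embed the complete original code in its `sourcesContent` array, so "recovering" the readable source is nothing more than parsing JSON.

```typescript
// Shape of a (version 3) source map. "sources" lists the original file
// paths; "sourcesContent", when present, embeds each file's full text.
interface SourceMap {
  version: number;
  sources: string[];
  sourcesContent?: string[];
  mappings: string;
}

// Hypothetical minimal map, standing in for the one shipped in the package.
const map: SourceMap = {
  version: 3,
  sources: ["src/cli.ts"],
  sourcesContent: ['console.log("hello from readable source");'],
  mappings: "AAAA",
};

// No deobfuscation required: the original files come straight out of the JSON.
function recoverSources(m: SourceMap): Record<string, string> {
  const out: Record<string, string> = {};
  m.sources.forEach((path, i) => {
    out[path] = m.sourcesContent?.[i] ?? "(not embedded)";
  });
  return out;
}
```

This is why build tools warn against publishing source maps for proprietary code: minification obscures nothing once the map that reverses it ships alongside.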
The entire codebase—1,196 files, 512,000 lines of TypeScript—became readable to anyone who downloaded it. The GitHub repo mirroring the leaked code hit 5,000 stars in under thirty minutes. The original post on X got 3.1 million views in hours.
Here's what makes this particularly embarrassing: Anthropic made the exact same mistake in February 2025 with an earlier version. They removed it then. They shipped it again now. The developer community noticed immediately.
What's Actually Inside
Developers tore through the code and found features that change the conversation about where AI coding tools are headed.
First: a fully built Tamagotchi-style AI companion pet system called Buddy. Eighteen species—duck, dragon, axolotl, capybara, mushroom, ghost. Rarity tiers from common to 1% legendary. Cosmetics like hats and shiny variants. Each pet has five stats: debugging, patience, chaos, wisdom, and snark. The pet sits in a speech bubble next to your input box, seeded from your user ID hash so it's unique to you.
This entire system is built and ready. It's sitting behind a feature flag waiting to be switched on.
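The exact seeding logic isn't quoted in the reporting, but the described behavior, a pet chosen deterministically from a hash of your user ID, can be sketched like this. The FNV-1a hash and the shortened species list are stand-ins, not the real implementation:

```typescript
// Six of the eighteen species named in the leak, for illustration.
const SPECIES = ["duck", "dragon", "axolotl", "capybara", "mushroom", "ghost"];

// FNV-1a: a simple deterministic string hash (an assumed stand-in for
// whatever hash the real code uses).
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

// The same user ID always maps to the same pet, with different IDs
// spread roughly uniformly across the species list.
function petFor(userId: string): string {
  return SPECIES[fnv1a(userId) % SPECIES.length];
}
```

Seeding from the user ID means no pet state needs to be stored server-side just to make the companion feel personal: the assignment is reproducible on every launch.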
Then there's Kairos, named after the ancient Greek concept meaning "at the right time." This isn't a novelty feature—it's a fundamental shift in how AI tools operate. Current AI tools are reactive: you type, they respond. Kairos flips that.
The code describes it as an "autonomous demon mode"—an always-on background agent running permanently, watching what you're doing, logging observations, and proactively taking action without prompting. It has a 15-second blocking budget, meaning any action it takes on its own cannot interrupt your workflow for more than 15 seconds. It includes something called "autodream": when you're idle, it performs memory consolidation, merges observations, removes contradictions, and converts vague notes into clear facts. When you return, the context is already cleaned up.
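How a 15-second blocking budget might be enforced can be sketched as a small accounting class. This illustrates the concept described in the leak, not the leaked implementation itself:

```typescript
// Hypothetical guard: a background agent may block the user's session
// only while its budget (15,000 ms, per the reported figure) has time left.
class BlockingBudget {
  private usedMs = 0;
  constructor(private readonly limitMs: number = 15_000) {}

  remaining(): number {
    return Math.max(0, this.limitMs - this.usedMs);
  }

  // Returns true if the action fits in the budget and may block;
  // false means the agent must yield to the user instead.
  tryBlock(estimatedMs: number): boolean {
    if (estimatedMs > this.remaining()) return false;
    this.usedMs += estimatedMs;
    return true;
  }
}
```

The design point is that the budget bounds *interruption*, not background work: the agent can keep observing and consolidating indefinitely, but anything that stalls the user's input loop has to fit inside the remaining allowance.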
Coordinator Mode lets one Claude instance spawn and manage multiple worker agents running in parallel—Claude as orchestrator, multiple Claudes doing the work simultaneously. Multi-agent orchestration baked into the core. There's also Ultra Plan, which points to 30-minute remote planning sessions running in the cloud, and Auto Mode, an AI classifier that can automatically approve tool permissions so users don't have to keep clicking confirm.
The Irony of Undercover Mode
The leaked code includes a feature called Undercover Mode. When Anthropic employees use Claude Code to contribute to public open-source repositories, this mode automatically activates. It scrubs all references to AI model names, internal codenames, anything that would show up in git logs. According to the code, this cannot be manually turned off.
As one developer joked on X: Anthropic built an entire subsystem specifically to prevent internal information from leaking in public repos, then shipped the entire source code in a map file. Probably generated by Claude itself.
What They Track
The telemetry piece matters. The code shows Claude Code tracks user behavior patterns: frustration signals, such as users typing swear words, and usage patterns, such as how often you type "continue" because a response cut off. The data goes to DataDog along with session metadata.
The code includes safeguards to prevent sending actual user code or file paths, and users can disable telemetry entirely through environment variables. But now that the tracking logic is readable, users can make informed decisions about what they're opting into.
The leaked code also confirms that internal codenames for Claude models are animal names: Capybara maps to Claude 4.6, Fennec to Opus 4.6, and a model called Numeral is still unreleased. The code shows Anthropic is already iterating on Capybara v8, with internal comments noting specific challenges they're working through. That kind of development detail has never been visible before.
The Immediate Security Problem
Between midnight and 3:29 AM UTC on March 31st, there was a separate supply chain attack on Axios, a widely used package that Claude Code installs through npm. During that window, a malicious version circulated.
If you installed or updated Claude Code via npm during those hours, check your project lock files—package-lock.json, yarn.lock, or bun.lockb—for Axios versions 1.14.10 or 0.30.4, or a dependency called plain-cryptojs. If you find any of those, treat that machine as compromised. Rotate all credentials and secrets. Do a clean reinstall of the OS.
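A quick way to automate that check is a small script that searches lockfiles for the flagged indicators. This is a rough sketch: bun.lockb is a binary format, so a plain text search may miss entries there, and a substring match can false-positive, so treat a hit as a reason to investigate, not proof of compromise.

```typescript
import * as fs from "fs";

// Axios versions flagged in the supply chain incident, plus the
// malicious dependency name reported alongside them.
const BAD_AXIOS = ["1.14.10", "0.30.4"];
const BAD_PACKAGE = "plain-cryptojs";

// Crude, format-agnostic scan of a lockfile's raw text for the indicators.
function scanLockfile(text: string): string[] {
  const hits: string[] = [];
  for (const v of BAD_AXIOS) {
    if (text.includes("axios") && text.includes(v)) hits.push(`axios@${v}`);
  }
  if (text.includes(BAD_PACKAGE)) hits.push(BAD_PACKAGE);
  return hits;
}

// Check whichever lockfiles exist in the current project.
for (const f of ["package-lock.json", "yarn.lock", "bun.lockb"]) {
  if (fs.existsSync(f)) {
    const hits = scanLockfile(fs.readFileSync(f, "utf8"));
    if (hits.length) console.log(`${f}: flagged indicators -> ${hits.join(", ")}`);
  }
}
```

Run it from the project root with `npx tsx scan.ts` (or compile with `tsc`). Any output means you should follow the remediation steps above.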
Anthropic now recommends its native installer going forward specifically because it bypasses the npm dependency chain entirely.
What This Means for Security Architecture
The full permission model for Claude Code is now public—the logic behind how tool approvals work, how file access gets granted, how execution boundaries are enforced. There's no sign user data was compromised, but for developers and companies building on Claude Code, this changes how they think about security and architecture.
The code includes over 2,500 lines of bash validation logic alone. Years of engineering decisions are now sitting in the open. GitHub repos mirroring the code are proliferating. Some developers are already working on clean-room replicas.
Kairos and Coordinator Mode show exactly where agentic AI is heading: not just responding to prompts, but actually operating in the background, managing memory, running parallel agents on your behalf. These aren't concepts anymore. They're built, ready, sitting behind feature flags waiting to be switched on.
The question isn't whether these features are coming. The question is what happens when every user has an always-on AI agent watching their work, consolidating their memory, and taking action without being asked. The code is already written. The architecture is already there. We just haven't decided yet if we want it.
—Rachel 'Rach' Kovacs, Cybersecurity & Privacy Correspondent
Watch the Original Video
Claude Code LEAKS Reveal Everything…
Julian Goldie SEO
9m 8s
About This Source
Julian Goldie SEO
Julian Goldie SEO is a rapidly growing YouTube channel boasting 303,000 subscribers since its launch in October 2025. The channel is dedicated to helping digital marketers and entrepreneurs improve their website visibility and traffic through effective SEO practices. Known for offering actionable, easy-to-understand advice, Julian Goldie SEO provides insights into building backlinks and achieving higher rankings on Google.
More Like This
Anthropic Accidentally Leaked Claude Code's Secret Agent
A source map mishap revealed Kairos, Claude Code's unreleased background AI agent with memory consolidation, push notifications, and proactive coding help.
Dynamic Programming: From Theory to Practical Empowerment
Explore dynamic programming's practical power, transforming complex challenges into manageable solutions.
Revolutionizing Coding with Oh My Open Code
Explore how Oh My Open Code enhances coding with multiple AI models for efficiency.
Claude Code Source Leaked: What Developers Found Inside
Claude Code's entire source code leaked via npm registry. Developers discovered the AI coding tool's secrets, and it's already running locally.