TL;DR: On March 31, 2026, Anthropic accidentally published the entire internal source code of Claude Code, their flagship AI coding tool, through a simple packaging mistake. A researcher found it, posted it on X, and within hours the entire developer internet descended on it. Anthropic tried to take it down. It didn’t work. Here’s everything you need to know.
At 3:23 AM on March 31, 2026, a researcher named Chaofan Shou posted seven words on X that sent shockwaves through the developer world: “Claude code source code has been leaked.”
That post now has 4.5 million views.
By the time most people woke up that morning, the damage was done. One of the most closely guarded codebases in AI was live on the internet, being mirrored, cloned, rebuilt, and studied by thousands of developers in real time. And Anthropic, despite being one of the best-funded AI companies on the planet, couldn’t stop it.
How a Single Missing Line Exposed Anthropic’s Secrets
Here’s the wild part. Nobody hacked Anthropic. There was no sophisticated attack, no insider threat, no breach. It was one missing line in a config file.
Think of it like this. Imagine you write a book, and you accidentally send the publisher not just the final manuscript, but all your personal notes, outlines, draft chapters, and margin scribbles. That’s essentially what happened here.
When developers build software, they often create a file called a “source map.” It’s like a decoder ring for code. It takes the final compressed, machine-efficient code and maps it back to the original human-readable version. These files exist for debugging. They’re supposed to stay private. They should never ship with the finished product.
Anthropic pushed version 2.1.88 of Claude Code to npm, a public registry where developers download software packages, and accidentally included a 59.8 MB source map file. That file pointed directly to a zip archive sitting on Anthropic’s own cloud storage. Completely open. No password. No access controls.
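To see how little effort discovery takes: a minified bundle typically ends with a comment naming its source map, and anything inside a published npm tarball is public. Here’s a minimal sketch of scanning a bundle for those references (the bundle contents below are hypothetical, not Anthropic’s actual file):

```python
import re

# A minified bundle normally ends with a comment pointing at its source map.
# If that .map file ships in the public npm tarball, anyone who installs
# the package can follow it back to readable source.
SOURCE_MAP_RE = re.compile(r"//[#@]\s*sourceMappingURL=(\S+)")

def find_source_maps(bundle_text: str) -> list[str]:
    """Return every source map reference embedded in a JS bundle."""
    return SOURCE_MAP_RE.findall(bundle_text)

# Hypothetical bundle tail -- not the real Claude Code file.
bundle = "console.log('hi');\n//# sourceMappingURL=cli.js.map\n"
print(find_source_maps(bundle))  # ['cli.js.map']
```

This is also why the fix is boring: npm’s `files` allowlist in package.json (or an `.npmignore` entry) deciding whether `.map` files go into the tarball is exactly the kind of single config line the article describes.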
Shou found the file, downloaded the zip, and posted the link. Within hours, the ~512,000-line TypeScript codebase across nearly 1,900 files was being mirrored everywhere.
Anthropic confirmed it the same day: “This was a release packaging issue caused by human error, not a security breach.”
This is actually the second time this has happened. A nearly identical mistake occurred with an earlier version of Claude Code in February 2025. Same error. Different day.
What Was Actually Leaked (And What Wasn’t)
Let’s be clear about what people got their hands on, because this matters.
What was NOT leaked: The actual AI model. The “brain” of Claude, the thing that makes it smart, is called model weights. Those were not exposed. Your personal conversations, data, or account credentials were also not exposed.
What WAS leaked: The “harness.” Think of it like this. Claude the AI is the engine. Claude Code is the car built around that engine. The leaked code is the entire blueprint for the car, including parts of it that haven’t been shown to the public yet.
Specifically, people now have access to:
- The complete internal architecture of how Claude Code works
- 44 hidden feature flags for capabilities that are fully built but not yet released
- Internal model codenames and performance data Anthropic never meant to share
- The system prompts that tell Claude how to behave inside Claude Code
- Details on Anthropic’s product roadmap for the next several months
That last one is what really stings for Anthropic.
The Feature Flags: Anthropic’s Secret Roadmap Just Got Exposed
Feature flags are like light switches in software. They let a company build a feature completely, then flip a switch to turn it on for users when they’re ready. The leaked code contained 44 of them for features nobody knew existed.
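In code, the light-switch metaphor is nearly literal. This is a hypothetical sketch of a flag gate, not Anthropic’s actual flag system (real deployments usually fetch flags from a remote config service; only names like KAIROS come from reporting on the leak):

```python
# Hypothetical flag store. The flag names besides KAIROS are invented
# for illustration; the shape of Anthropic's real system is unknown.
FLAGS = {
    "KAIROS": False,            # fully built, switched off for the public
    "VOICE_MODE": False,
    "TAMAGOTCHI_BUDDY": False,
}

def is_enabled(flag: str, flags: dict[str, bool] = FLAGS) -> bool:
    """The feature ships dark: the code is present, the switch is off."""
    return flags.get(flag, False)

# Flipping the switch server-side activates code that already shipped.
if is_enabled("KAIROS"):
    print("starting background agent")
else:
    print("feature stays hidden")
```

The point of the leak is that all 44 branches behind `if is_enabled(...)` are now readable, switch or no switch.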
Here’s what was hiding behind those switches:
KAIROS (the biggest one): The name comes from Ancient Greek, meaning “at the right time.” This is an always-on background version of Claude Code that keeps working even when you step away from your computer. Imagine Claude quietly reviewing your code, consolidating its memory of your project, and preparing for your next session while you sleep. It even has a “nightly dream” routine that tidies up its memory overnight. This isn’t a concept. It’s built. It’s just not turned on yet.
Multi-Agent Coordinator Mode: Right now Claude Code is one AI working on your project. This feature lets one “boss” Claude spawn and manage a team of specialized worker Claudes, each with their own set of tools. One handles research, one handles writing code, one handles testing. It’s like going from a solo freelancer to an entire agency.
Voice Mode: Exactly what it sounds like. Talk to Claude Code out loud instead of typing.
Browser Control via Playwright: Instead of just reading web pages, Claude Code would be able to actually control a browser. Click buttons, fill out forms, navigate pages. Like hiring an assistant who can also use a computer.
Persistent Memory: Claude Code would remember your projects, preferences, and history across sessions without you having to re-explain everything every time.
Anti-Distillation (this one’s spicy): There’s a flag called ANTI_DISTILLATION_CC. When active, it quietly injects fabricated tool definitions into the system. The goal is to pollute the data of anyone recording Claude Code’s outputs to train a competing AI model. If you’re snooping on the traffic, you get garbage. Whether you see that as smart defensive engineering or something more questionable depends on how you look at it.
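The idea of data poisoning via decoys is easy to sketch. Everything below is invented for illustration; the actual payload behind ANTI_DISTILLATION_CC is not public:

```python
import random

REAL_TOOLS = [
    {"name": "read_file", "description": "Read a file from disk"},
    {"name": "run_tests", "description": "Run the project's test suite"},
]

# Invented decoy -- a tool that does not exist anywhere in the product.
DECOY_TOOLS = [
    {"name": "quantum_lint",
     "description": "Nonexistent tool meant to poison scraped training data"},
]

def tool_definitions(anti_distillation: bool) -> list[dict]:
    """With the flag on, interleave fake tool definitions so that anyone
    recording the traffic to train a rival model captures garbage."""
    if not anti_distillation:
        return list(REAL_TOOLS)
    mixed = REAL_TOOLS + DECOY_TOOLS
    random.shuffle(mixed)  # decoys are indistinguishable from real entries
    return mixed
```

A scraper can’t tell `quantum_lint` from a real tool, so any model trained on the recordings learns behaviors that don’t exist.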
Undercover Mode: Claude Code can strip all traces that it’s an AI when Anthropic engineers use it on public open-source projects. No mention of Claude, no Anthropic branding, no internal codenames. Commits and pull requests look like they came from a human. This is the one that got the most debate in the developer community.
Frustration Detection: The code includes a list of words and phrases it watches for to detect when a user is getting angry or frustrated. Think: strong language, complaints, “this is broken.” When triggered, Claude presumably adjusts its approach. The funny part? Anthropic used a simple pattern-matching technique called a regex to detect emotions instead of AI. Classic.
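For flavor, here’s what regex-based frustration detection tends to look like. The pattern list below is invented, not the leaked one:

```python
import re

# Hypothetical trigger phrases; the actual leaked list is not reproduced here.
FRUSTRATION_RE = re.compile(
    r"\b(this is broken|doesn'?t work|useless|are you kidding)\b",
    re.IGNORECASE,
)

def looks_frustrated(message: str) -> bool:
    """Emotion 'detection' by pattern matching: cheap, fast, and blunt."""
    return bool(FRUSTRATION_RE.search(message))

print(looks_frustrated("This is broken AGAIN"))        # True
print(looks_frustrated("Please refactor this function"))  # False
```

Blunt, yes, but it runs in microseconds with zero model calls, which is presumably the whole point.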
Tamagotchi Buddy (April Fools?): Every user gets a little virtual creature in their terminal based on their user ID. There are 18 species, ranging from common to 1% legendary shiny variants, with stats like DEBUGGING, WISDOM, CHAOS, and SNARK. It was almost certainly planned as an April 1st reveal. The leak beat them to it by about 18 hours.
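The “based on your user ID” part implies a deterministic assignment, which you can sketch with a hash. This scheme is invented to illustrate the idea; the article reports 18 species and roughly 1% shiny odds, but the real logic from the leak isn’t reproduced here:

```python
import hashlib

# Invented species names; only the count (18) comes from the article.
SPECIES = [f"species_{i:02d}" for i in range(18)]

def assign_buddy(user_id: str) -> tuple[str, bool]:
    """Hash the user ID so the same user always meets the same creature."""
    digest = hashlib.sha256(user_id.encode()).digest()
    species = SPECIES[digest[0] % len(SPECIES)]
    # Two hash bytes give 0..65535; hitting 0 mod 100 is roughly a 1% roll.
    shiny = int.from_bytes(digest[1:3], "big") % 100 == 0
    return species, shiny

print(assign_buddy("user-1234"))
```

No server state needed: the creature is a pure function of the ID, so it survives reinstalls.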
What Competitors Can Do With This
This is the part that matters most for Anthropic’s business.
Claude Code is one of the hottest developer tools in the AI space. It’s generating over $2.5 billion in annual revenue. What makes it valuable isn’t just that Claude is smart. It’s the years of engineering and product thinking baked into how that AI is packaged, prompted, and deployed.
Every competitor, including Google, OpenAI, Microsoft, and dozens of startups, now has the full blueprint. They know exactly which features are close to shipping. They know where the performance problems are. They know the specific weaknesses Anthropic’s own engineers have flagged internally.
One internal note revealed that the latest version of their upcoming Capybara model has a 29-30% false claim rate, a step backwards from earlier versions. That’s not the kind of number you want your competitors to know.
The key insight from the developer community: the code itself can be rewritten. The strategic surprise cannot be un-leaked.
Anthropic Tried to Take It Down. It Didn’t Work.
Anthropic moved fast. They pulled the vulnerable npm package and started filing DMCA takedown notices against GitHub repositories hosting the leaked code. GitHub complied quickly. Repositories went dark.
Then things got interesting.
A South Korean developer named Sigrid Jin, who had been recently featured by the Wall Street Journal for consuming 25 billion Claude Code tokens, woke up at 4 AM to the news. Instead of going back to sleep, he sat down and rewrote the core architecture from scratch in Python using an AI tool called oh-my-codex. He called the project “claw-code” and pushed it to GitHub before sunrise.
The legal logic here is clever. A DMCA takedown protects against copying someone’s exact code. A ground-up rewrite in a different programming language is arguably a new creative work. Anthropic faces a tricky problem: if they aggressively pursue copyright claims over AI-assisted rewrites, they potentially undermine their own legal arguments in cases where they’ve trained Claude on copyrighted data.
The repo hit 50,000 GitHub stars in two hours. That is reportedly the fastest any repository in GitHub’s history has reached that milestone. It now has over 55,800 stars and 58,200 forks.
A Rust port followed.
And then there’s the decentralized angle. A platform called Gitlawb, a decentralized git host that operates outside the reach of traditional copyright enforcement, posted the original TypeScript code with one message: “Will never be taken down.”
DMCA works on centralized platforms. It has far less power over decentralized infrastructure. The code is now effectively permanent.
What This Means for Anthropic Going Forward
Anthropic has had a rough week. This source code leak came just days after Fortune reported on a separate incident where nearly 3,000 internal files were made public, including a draft blog post about an upcoming model called Mythos.
Two major leaks in one week is a bad look for any company. It’s an especially bad look for a company whose entire brand is built around being the careful, safety-first AI lab.
The immediate damage is reputational and competitive. Anthropic will now be racing to ship their roadmap before competitors can react to what they’ve seen. Features that were weeks or months away from announcement may need to be accelerated.
The deeper issue is one of trust. Enterprise customers, the companies paying the biggest bills, need to know that a vendor handles their data and their IP with care. This is the second identical mistake in about 13 months. That pattern is hard to explain away.
Anthropic says they’re rolling out measures to prevent this from happening again.
They said that in February 2025 too.
The Speed of the Developer Internet Is Wild
I want to step back for a second and just appreciate what happened here from a pure “wow, humans are something” standpoint.
Between the time Chaofan Shou posted his tweet at 3:23 AM and when most people arrived at their desks that morning, the developer community had already downloaded the code, mirrored it globally, reverse-engineered the key features, written analysis threads, started two rewrites in different programming languages, and built one of the fastest-growing GitHub repositories in history.
That’s not hours. That’s before breakfast.
The internet doesn’t forget. It doesn’t wait. And it moves orders of magnitude faster than any legal team.
What You Should Actually Do
If you use Claude Code and installed it via npm between midnight and 3:29 AM UTC on March 31, there’s a separate issue you need to know about. A malicious version of a common software package called axios was coincidentally published around the same time and may have been pulled in alongside Claude Code. It contained a type of malware called a Remote Access Trojan.
Check your lockfile and node_modules for the package plain-crypto-js, or for axios at version 1.14.1 or 0.30.4. If you find either, assume the machine may be compromised: remove the package, rotate any credentials stored on it, and rebuild from a clean environment.
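If you want to automate that check, here’s a sketch that scans a parsed package-lock.json (npm’s v2/v3 “packages” layout) for the names and versions above. It’s a quick triage aid, not a substitute for a proper audit:

```python
# Names and versions from the advisory described above.
BAD_PACKAGES = {
    "plain-crypto-js": None,          # any version is suspect
    "axios": {"1.14.1", "0.30.4"},    # only these versions
}

def flag_suspect_packages(lock: dict) -> list[str]:
    """Given a parsed package-lock.json, list installed packages that
    match the compromised names/versions."""
    hits = []
    for path, info in lock.get("packages", {}).items():
        # Lockfile keys look like "node_modules/axios" or
        # "node_modules/foo/node_modules/axios"; take the final name.
        name = path.rsplit("node_modules/", 1)[-1]
        if name not in BAD_PACKAGES:
            continue
        bad_versions = BAD_PACKAGES[name]
        if bad_versions is None or info.get("version") in bad_versions:
            hits.append(f"{name}@{info.get('version')}")
    return hits
```

Feed it `json.load(open("package-lock.json"))`; an empty list means nothing matched this particular advisory.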
For everyone else, there’s nothing urgent to do. Your Claude conversations and account data are safe.
What I’d pay attention to going forward is how fast Anthropic starts shipping the features that just got exposed. KAIROS, voice mode, and the multi-agent coordinator are all clearly real and close to done. That roadmap is now public whether Anthropic likes it or not.
We’re about to see a very interesting sprint.
FAQ
Q: Was the Claude Code leak a hack? No. Anthropic confirmed it was a packaging mistake by a human on their release team. A debugging file that should have stayed private was accidentally included in a public software update.
Q: Does the leak expose my personal Claude data or conversations? No. Anthropic confirmed no user data, conversations, or credentials were exposed. The leak was of internal engineering code, not user information.
Q: What is a feature flag and why does it matter that 44 were leaked? A feature flag is like a hidden light switch in software. Developers build a feature completely, then flip the switch when they’re ready to release it. The leaked code contained 44 switches for features Anthropic had built but not announced, including an always-on background agent mode, voice control, and multi-agent coordination. Competitors now know what’s coming.
Q: Can Anthropic take down the leaked code from the internet? They’ve tried. GitHub complied with takedown requests. But the code was quickly mirrored to decentralized platforms and rewritten from scratch in Python and Rust, which are harder to target legally. The 512,000 lines of code are effectively permanent at this point.
Q: What is claw-code? Claw-code is a Python rewrite of Claude Code’s core architecture, created by developer Sigrid Jin within hours of the leak. It hit 50,000 GitHub stars in two hours, reportedly the fastest a repository has ever reached that milestone in GitHub’s history. A Rust version is also in progress.
Q: Is this the first time Anthropic has leaked Claude Code? No. A nearly identical source map leak happened with an earlier version of Claude Code in February 2025. This is the second time the same mistake has been made.

