Claude Code's Source Didn't Leak. It Was Already Public for Years.
I build a JavaScript obfuscation tool (AfterPack), so when the Claude Code "leak" hit VentureBeat, Fortune, and The Register this week, I did what felt obvious — I analyzed the supposedly leaked code to see what was actually protected.
I wrote a detailed breakdown on the AfterPack blog. Here's the core of it.
What Happened
A source map file — a standard debugging artifact defined in ECMA-426 — was accidentally included in version 2.1.88 of the @anthropic-ai/claude-code package on npm. Security researcher Chaofan Shou spotted it, and within 24 hours a clean-room Rust rewrite hit 110K GitHub stars and a breakdown site (ccleaks.com) cataloged every hidden feature.
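The mechanism is mundane: bundlers append a `//# sourceMappingURL=` comment to the minified output, pointing debuggers (and anyone else) at the original sources. A minimal sketch of spotting that reference in a bundle's text — `findSourceMapRef` is a hypothetical helper, not part of any published tool:

```javascript
// Scan bundle text for a source-map reference. The trailing
// "//# sourceMappingURL=..." comment tells tooling where the
// original, unminified sources live -- which is exactly what
// shipped by accident alongside cli.js.
function findSourceMapRef(bundleText) {
  const match = bundleText.match(/\/\/[#@]\s*sourceMappingURL=(\S+)/);
  return match ? match[1] : null;
}

const bundle = 'var a=1;console.log(a);\n//# sourceMappingURL=cli.js.map';
console.log(findSourceMapRef(bundle)); // "cli.js.map"
```

If that comment (or the `.map` file it names) ends up in a published npm package, the original source tree is one HTTP request away.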
This is the second time — a nearly identical source map leak happened in February 2025.
The Code Was Already There
Claude Code ships as a single bundled cli.js on npm — 13MB, 16,824 lines of JavaScript. It's been publicly accessible since launch. You can view it right now at unpkg.com.
I analyzed it. It's minified, not obfuscated. Here's what that means in practice:
| Technique | Present? |
| --- | --- |
| Variable name mangling | Yes (standard minification) |
| Whitespace removal | Yes (standard minification) |
| String encryption/encoding | No |
| Control flow flattening | No |
| Dead code injection | No |
| Self-defending / anti-tamper | No |
| Property name mangling | No |
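To make the distinction concrete, here's a toy example of my own (not Claude Code's actual code): the same function minified versus obfuscated with a string-array encoding. Minification shortens names but leaves every literal searchable; obfuscation hides the literal behind a decode step.

```javascript
// Original:  function greet(name) { return "Hello, " + name; }

// Minified: names shortened, whitespace gone -- but the string
// literal "Hello, " is still plaintext and trivially greppable.
const greetMin = n => "Hello, " + n;

// Obfuscated (toy string-array encoding): the literal only exists
// base64-encoded in a lookup table, so grepping for "Hello" in the
// shipped file finds nothing.
const _t = ["SGVsbG8sIA=="];
const _d = i => Buffer.from(_t[i], "base64").toString("utf8");
const greetObf = n => _d(0) + n;

console.log(greetMin("world")); // "Hello, world"
console.log(greetObf("world")); // "Hello, world"
```

Both behave identically; only the second resists casual string harvesting. Claude Code's cli.js is entirely in the first category.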
All ~148,000 string literals sit in plaintext — system prompts, tool descriptions, behavioral instructions.
I Asked Claude to Deobfuscate Itself
This is the part that got me. I pointed Claude — Anthropic's own model — at its own minified cli.js and it just... explained it.
Using AST-based extraction, we parsed the full 13MB file in 1.47 seconds and pulled out 147,992 strings. System prompts, tool descriptions, 837 telemetry events (all prefixed with tengu_ — Claude Code's internal codename), 504 environment variables, a Datadog API key.
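The real extraction walked a proper AST, which handles escapes and template literals correctly. But even a crude regex sketch — hypothetical, and far less robust — shows why plaintext literals are so easy to harvest from a minified bundle:

```javascript
// Naive string-literal harvester. An AST walk via a real JS parser
// is the robust approach; this regex version just illustrates that
// minification leaves every literal sitting in plaintext.
function extractStrings(js) {
  const re = /"(?:[^"\\]|\\.)*"|'(?:[^'\\]|\\.)*'/g;
  return (js.match(re) || []).map(s => s.slice(1, -1));
}

const minified =
  'var a="tengu_session_start";fetch("https://api.example.com",{headers:{"x-key":"sk-test"}});';
console.log(extractStrings(minified));
// [ 'tengu_session_start', 'https://api.example.com', 'x-key', 'sk-test' ]
```

Run that over 13MB of minified JavaScript and the telemetry names, endpoints, and keys fall out in seconds.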
Geoffrey Huntley published a full cleanroom transpilation of Claude Code months before this leak using a similar approach — LLMs converting minified JS to readable TypeScript. His deobfuscation repo on GitHub demonstrates the technique.
What Source Maps Actually Added
To be fair, source maps did surface some genuinely sensitive stuff:
- Internal code comments and TODOs
- The full 1,884-file project tree with original filenames
- Feature flags with codenames like tengu_amber_flint and tengu_cobalt_frost
- KAIROS — an unreleased autonomous daemon mode
- Anti-distillation mechanisms that inject decoy tools to poison training data
That's real exposure. But the actual code logic was already there in cli.js.
This Happens Everywhere
I ran our Security Scanner on GitHub.com and found email addresses and internal URLs in their production JavaScript and source maps. Same with claude.ai. Same class of exposure, zero headlines.
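The scan involved is nothing exotic. A rough sketch (not AfterPack's actual scanner logic) of flagging email addresses embedded in fetched production JavaScript:

```javascript
// Flag email addresses embedded in shipped JavaScript -- the same
// class of exposure found on production sites. Sketch only; a real
// scanner would also check source maps, tokens, and internal URLs.
function findEmails(jsText) {
  const re = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;
  return [...new Set(jsText.match(re) || [])]; // dedupe hits
}

const sample =
  'const SUPPORT="ops@internal.example.com";// TODO ping ops@internal.example.com';
console.log(findEmails(sample)); // [ 'ops@internal.example.com' ]
```

Point something like this at any site's bundles and you'll usually find internal details that were never meant to ship.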
AI Makes This Urgent
The reality is simple: minification was never security. It's a size optimization that bundlers like esbuild, Webpack, and Rollup do by default. Variable renaming slows down human readers but LLMs read minified code like you read formatted code.
System prompts are the new trade secrets. Telemetry names reveal product roadmaps. Environment variables expose what you're not ready to ship. And every JavaScript application — React frontends, Electron apps, Node.js CLIs — ships code that AI can now analyze trivially.
You can check what your site exposes: `npx afterpack audit https://your-site.com`
Originally published on AfterPack.