Claude Code's Source Leaked
Hi there, little friend! Let's talk about a silly oopsie!
Imagine your favorite toy robot, Claude. Claude has a secret recipe book inside its head that tells it how to talk and play.
Well, guess what? Someone at Claude's house accidentally left the recipe book open for everyone to see! 😱 It wasn't a bad guy breaking in, just a little mistake, like leaving your lunchbox open.
Now, some smart people saw parts of Claude's secret recipe. They saw new ideas for Claude, like new games it could play.
But don't worry! Claude's brain is still safe, and it can still play with you. It's just like if someone peeked at your secret cookie recipe – they know how to make them, but your cookies are still yummy! It teaches us to be super careful with our secret things. 😊
🚨 Alright guys, this is a huge deal
🔓 Someone left the door open at Anthropic. And the AI world just walked in. Three days ago, security researcher Chaofan Shou (@Fried_Rice) noticed something unusual in the npm registry.
Tucked inside version 2.1.88 of @anthropic-ai/claude-code was a 57 MB file called cli.js.map: a source map that acted as a complete decoder ring back to Anthropic's original TypeScript source code.
No sophisticated hack. No zero-day exploit. Just a single misconfigured build script.
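To see why a shipped .map file works as a "decoder ring": Source Map v3 files are plain JSON, and when a build embeds sources (for example via TypeScript's "inlineSources" option), the sourcesContent array carries the complete original files. A minimal sketch, with a made-up stand-in for a file like cli.js.map (the path and contents below are hypothetical, not from the actual leak):

```javascript
// Source Map v3 files are JSON. With "inlineSources" enabled (or
// equivalent bundler defaults), "sourcesContent" embeds the full
// original files next to the compiled output. This object stands in
// for a published map file; names and contents are illustrative only.
const map = {
  version: 3,
  file: "cli.js",
  sources: ["src/agents/swarm.ts"],
  sourcesContent: ["export function spawnSubAgent() { /* original TS */ }"],
  mappings: "AAAA",
};

// Recovering the original TypeScript is just a JSON lookup:
map.sources.forEach((name, i) => {
  console.log(`--- ${name} ---`);
  console.log(map.sourcesContent[i]);
});
```

No deobfuscation, no tooling beyond a JSON parser: anyone who downloads the tarball can read the sources back out.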
What developers found inside 1,900 files:
🧠 Self-healing memory: a three-layer architecture built to fight context decay in long AI sessions
📅 Unreleased model codenames: "Fennec" (Opus 4.7), "Sonnet 4.8," and the mysterious "Capybara" (Claude Mythos)
🤖 Built-in agent swarms: Claude can spawn parallel sub-agents autonomously. This isn't a feature; it's infrastructure.
👻 Ghost contributing: logic for contributing to open-source repos without explicit AI attribution
Anthropic's response: Human error in release packaging. No model weights compromised. No customer data exposed. The brain is still safe. But the skeleton is now public.
Here's the lesson no one wants to say out loud:
You can spend years and hundreds of millions building a proprietary AI system. And one forgotten line in a .npmignore can make it readable to anyone with a terminal.
Security isn't just about your models. It's about your build pipeline, your CI config, your npm publish script.
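As a concrete illustration of that publish-script point (package name and paths here are assumptions, not Anthropic's actual config): a .npmignore is a denylist, so it fails open the moment someone forgets a pattern like `*.map`. The `files` field in package.json is an allowlist, so it fails closed; only what you explicitly name ships.

```json
{
  "name": "@example/cli",
  "version": "0.0.1",
  "bin": { "example": "dist/cli.js" },
  "files": ["dist/cli.js"]
}
```

Before publishing, `npm pack --dry-run` prints exactly what the tarball would contain, so a stray `dist/cli.js.map` is visible before it ever reaches the registry.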
The smallest door is still a door.
🔗 Original discovery: Twitter Post - Chaofan Shou
🔥 Link to the open-source GitHub repo of Claude Code I just published: Yasas Banu - Claude Code Repo