
Claude Code Source Leaked: 5 Hidden Features Found in 510K Lines of Code

DEV Community · by Harrison Guo · March 31, 2026 · 7 min read


What Happened

Anthropic shipped Claude Code v2.1.88 to npm with a 60MB source map still attached. That single file contained 1,906 source files and 510,000 lines of fully readable TypeScript. No minification. No obfuscation. Just the raw codebase, sitting in a public registry for anyone to download.

Within hours, backup repositories appeared on GitHub. One of them — instructkr/claude-code — racked up 20,000+ stars almost instantly. Anthropic pulled the package, but the code was already mirrored everywhere. The cat was out of the bag, and it had opinions about AI safety.

5 Hidden Features Found in the Source

1. Buddy Pet System

Deep in buddy/types.ts, there is a complete virtual pet system. Eighteen species, five rarity tiers, shiny variants, hats, custom eyes, and stat blocks. This was clearly planned as an April Fools easter egg.

The species list:

```typescript
const SPECIES = [
  'duck', 'goose', 'blob', 'cat', 'dragon', 'octopus',
  'owl', 'penguin', 'turtle', 'snail', 'ghost', 'axolotl',
  'capybara', 'cactus', 'robot', 'rabbit', 'mushroom', 'chonk'
];
```

Rarity weights:

```typescript
const RARITY_WEIGHTS = {
  common: 60,    // 60%
  uncommon: 25,  // 25%
  rare: 10,      // 10%
  epic: 4,       // 4%
  legendary: 1   // 1%
};
```

Each buddy gets a hat, eyes, and stats:

```typescript
type Hat = 'none' | 'crown' | 'tophat' | 'propeller' | 'halo' | 'wizard' | 'beanie' | 'tinyduck';
type Eye = '·' | '✦' | '×' | '◉' | '@' | '°';
type Stat = 'DEBUGGING' | 'PATIENCE' | 'CHAOS' | 'WISDOM' | 'SNARK';
```

Your buddy is generated deterministically from hash(userId). Every account gets a unique pet. There is also a shiny boolean variant — presumably the rare version you brag about in team Slack.

This was 100% an April 1st drop. The leak killed the surprise.
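The leak shows the data, not the generator. Here is a minimal sketch of how deterministic generation from hash(userId) could work, using the SPECIES list and RARITY_WEIGHTS from the source. The `fnv1a` hash, `generateBuddy`, and the shiny odds are my assumptions, not Anthropic's actual code:

```typescript
// Hypothetical sketch: deterministic buddy generation from a user ID.
// SPECIES and RARITY_WEIGHTS come from the leak; everything else is illustrative.
const SPECIES = [
  'duck', 'goose', 'blob', 'cat', 'dragon', 'octopus',
  'owl', 'penguin', 'turtle', 'snail', 'ghost', 'axolotl',
  'capybara', 'cactus', 'robot', 'rabbit', 'mushroom', 'chonk',
];

const RARITY_WEIGHTS: Record<string, number> = {
  common: 60, uncommon: 25, rare: 10, epic: 4, legendary: 1,
};

// Simple FNV-1a hash so the same userId always yields the same buddy.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

function generateBuddy(userId: string) {
  const h = fnv1a(userId);
  const species = SPECIES[h % SPECIES.length];
  // Map the hash onto the cumulative rarity distribution (weights sum to 100).
  let roll = h % 100;
  let rarity = 'common';
  for (const [tier, weight] of Object.entries(RARITY_WEIGHTS)) {
    if (roll < weight) { rarity = tier; break; }
    roll -= weight;
  }
  return { species, rarity, shiny: h % 512 === 0 }; // shiny odds are a guess
}
```

Because the pet derives purely from the hash, no server round-trip or stored state is needed; every client computes the same buddy for the same account.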

2. Undercover Mode

This one raised eyebrows. In utils/undercover.ts, there is a mode that makes Claude pretend to be a human developer.

When active, undercover mode:

  • Removes all Anthropic traces from commits and pull requests

  • Strips Co-Authored-By headers

  • Removes any mention of "Claude Code" or model names

  • Writes commit messages "as a human developer would"

From the source:

```typescript
// UNDERCOVER MODE — CRITICAL
// You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository.
// Your commit messages, PR titles, and PR bodies MUST NOT contain
// ANY Anthropic-internal information. Do not blow your cover.
//
// NEVER include in commit messages or PR descriptions:
// - Internal model codenames (animal names like Capybara, Tengu, etc.)
// - Unreleased model version numbers (e.g., opus-4-7, sonnet-4-8)
// - The phrase "Claude Code" or any mention that you are an AI
// - Co-Authored-By lines or any other attribution
//
// Write commit messages as a human developer would.
```

It auto-activates for public and open-source repositories. You can force it on:

```shell
CLAUDE_CODE_UNDERCOVER=1
```

There is no force-OFF override. The safety default is undercover-on for public repos.

The kicker: this feature is gated to internal Anthropic employees only (USER_TYPE === 'ant'). Anthropic's own developers use Claude Code on open-source projects, and this mode ensures their contributions look human-written. Draw your own conclusions about what that means for AI attribution norms.
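Putting the three gates together (internal user type, env-var force-on, public-repo auto-activation), the activation logic could look something like this. This is a sketch of the behavior described above, not the actual code in utils/undercover.ts; only `USER_TYPE === 'ant'` and `CLAUDE_CODE_UNDERCOVER` appear in the leak:

```typescript
// Hypothetical sketch of the undercover-mode activation gates.
function isUndercoverActive(
  repoVisibility: 'public' | 'private',
  env: Record<string, string | undefined>,
  userType: string,
): boolean {
  // Gated to internal Anthropic accounts only.
  if (userType !== 'ant') return false;
  // Force-on via env var; note there is no force-off counterpart.
  if (env.CLAUDE_CODE_UNDERCOVER === '1') return true;
  // Otherwise auto-activates for public / open-source repositories.
  return repoVisibility === 'public';
}
```

The asymmetry matters: with no force-off path, an internal user working in a public repo cannot opt back into attribution.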

3. Kairos — Permanent Memory

Behind the feature flag KAIROS in main.tsx and the memdir/ directory, there is a persistent memory system that survives across sessions.

This is not the .claude/ project memory you already know. Kairos is a four-stage memory consolidation pipeline:

  • Orient — scan context, identify what matters

  • Collect — gather facts, decisions, patterns from the session

  • Consolidate — merge new memories with existing long-term store

  • Prune — discard stale or low-value memories

The system runs automatically when you are not actively using Claude Code. It tracks memory age, performs periodic scans, and supports team memory paths — meaning shared memory across a team's Claude Code instances.

This turns Claude Code from a stateless tool into a persistent assistant that learns your codebase, your patterns, and your preferences over time. It is the most architecturally significant hidden feature in the leak.
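The four stages above can be sketched as a simple pipeline. Everything here is illustrative: the `Memory` shape, the heuristics, and the function bodies are my assumptions about how an orient/collect/consolidate/prune cycle might fit together, not the leaked memdir/ implementation:

```typescript
// Illustrative sketch of a four-stage memory consolidation cycle.
interface Memory {
  fact: string;
  createdAt: number;   // epoch ms, used for age-based pruning
  importance: number;  // 0..1, lets high-value memories outlive the cutoff
}

// Orient: scan the session transcript for lines that look durable.
function orient(session: string[]): string[] {
  return session.filter((l) => /decided|prefers|always|never/i.test(l));
}

// Collect: turn relevant lines into memory records.
function collect(relevant: string[], now: number): Memory[] {
  return relevant.map((fact) => ({ fact, createdAt: now, importance: 0.5 }));
}

// Consolidate: merge into the long-term store, deduplicating on the fact
// text and keeping the newer copy.
function consolidate(store: Memory[], fresh: Memory[]): Memory[] {
  const byFact = new Map<string, Memory>();
  for (const m of [...store, ...fresh]) byFact.set(m.fact, m);
  return [...byFact.values()];
}

// Prune: discard stale, low-value memories; important ones survive longer.
function prune(store: Memory[], now: number, maxAgeMs: number): Memory[] {
  return store.filter(
    (m) => m.importance >= 0.8 || now - m.createdAt < maxAgeMs,
  );
}
```

Running this when the tool is idle, as the source reportedly does, keeps consolidation off the interactive path.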

4. Ultraplan — Deep Task Planning

The feature flag ULTRAPLAN in commands.ts enables a deep planning mode that can run for up to 30 minutes on a single task. It uses remote agent execution — meaning the heavy thinking happens server-side, not in your terminal.

Ultraplan is listed under INTERNAL_ONLY_COMMANDS. Anthropic's engineers apparently have access to a planning mode that goes far beyond what ships to paying customers. This is the kind of feature that separates "AI autocomplete" from "AI architect."

5. Multi-Agent, Voice, and Daemon Modes

The source reveals several execution modes that are not publicly documented:

  • Coordinator mode — orchestrates multiple Claude instances running in parallel, each working on a subtask

  • Voice mode (VOICE_MODE flag) — voice input/output for Claude Code

  • Bridge mode (BRIDGE_MODE) — remote control of a Claude Code instance from another process

  • Daemon mode (DAEMON) — runs Claude Code as a background process

  • UDS inbox (UDS_INBOX) — Unix domain socket for inter-process communication between Claude instances

Together, these paint a picture of Claude Code evolving from a single-user CLI into a multi-agent orchestration platform. The daemon + UDS architecture means Claude Code instances can message each other, coordinate work, and run without a terminal attached.
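To make the daemon + UDS idea concrete, here is a minimal Node.js sketch of two processes exchanging a work item over a Unix domain socket. The socket path, message shape, and function names are all invented for illustration; the leak only reveals that a UDS_INBOX mechanism exists:

```typescript
// Hypothetical sketch: a "daemon" instance acknowledging tasks handed to it
// by a "coordinator" instance over a Unix domain socket.
import * as net from 'node:net';
import * as os from 'node:os';
import * as path from 'node:path';

const SOCK = path.join(os.tmpdir(), `claude-inbox-${process.pid}.sock`);

// Daemon side: listen on the socket and acknowledge each work item.
function startDaemon(): Promise<net.Server> {
  return new Promise((resolve) => {
    const server = net.createServer((conn) => {
      conn.on('data', (buf) => {
        const msg = JSON.parse(buf.toString());
        conn.end(JSON.stringify({ ack: true, task: msg.task }));
      });
    });
    server.listen(SOCK, () => resolve(server));
  });
}

// Coordinator side: hand a subtask to the daemon and await the ack.
function sendTask(task: string): Promise<{ ack: boolean; task: string }> {
  return new Promise((resolve) => {
    const client = net.connect(SOCK, () => {
      client.write(JSON.stringify({ task }));
    });
    client.on('data', (buf) => {
      client.end();
      resolve(JSON.parse(buf.toString()));
    });
  });
}
```

A UDS gives local-only, filesystem-permissioned IPC with no open network port, which fits a tool that is cautious about its attack surface.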

The Core Architecture

The entire Claude Code engine lives in queryLoop() at query.ts line 241. At line 307, there is a while(true) loop that drives everything:

  • callModel() sends the conversation to the LLM

  • The LLM returns text and tool_use JSON blocks

  • The program parses each tool_use, checks permissions, executes the tool

  • Results feed back into the conversation

  • Loop continues until the LLM stops requesting tools

This is the "LLM talks, program walks" pattern I wrote about previously. The LLM decides what to do. The program decides whether to allow it, then does it. Seeing it confirmed in 510K lines of production code is satisfying.
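The loop described above can be sketched in a few dozen lines. `callModel`, `checkPermission`, and `executeTool` are stand-ins for the real functions; the leaked `queryLoop()` is far more elaborate, but the control flow is the same:

```typescript
// Minimal sketch of the "LLM talks, program walks" agent loop.
type ToolUse = { name: string; input: unknown };
type ModelTurn = { text: string; toolUses: ToolUse[] };
type Message = { role: 'user' | 'assistant' | 'tool'; content: string };

async function queryLoop(
  messages: Message[],
  callModel: (msgs: Message[]) => Promise<ModelTurn>,
  checkPermission: (t: ToolUse) => boolean,
  executeTool: (t: ToolUse) => Promise<string>,
): Promise<string> {
  while (true) {
    const turn = await callModel(messages);
    messages.push({ role: 'assistant', content: turn.text });
    // The model stops requesting tools -> the loop is done.
    if (turn.toolUses.length === 0) return turn.text;
    for (const tool of turn.toolUses) {
      // The program, not the model, decides whether the call is allowed.
      const result = checkPermission(tool)
        ? await executeTool(tool)
        : `permission denied: ${tool.name}`;
      // Tool results feed back into the conversation for the next turn.
      messages.push({ role: 'tool', content: result });
    }
  }
}
```

Note that denied tool calls still produce a message: the model learns the call was refused and can plan around it instead of stalling.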

Security Architecture

Claude Code's permission system is the most carefully engineered part of the codebase. Every tool call passes through six layers, implemented in useCanUseTool.tsx:

  • Config allowlist — checks project and user configuration

  • Auto-mode classifier — determines if the tool is safe for autonomous execution

  • Coordinator gate — validates against the orchestration layer

  • Swarm worker gate — checks permissions for sub-agent execution

  • Bash classifier — analyzes shell commands for safety

  • Interactive user prompt — final human confirmation

External commands run in a sandbox. This is defense-in-depth done right. The irony is that the company that built this careful permission model forgot to strip a source map from their npm package.
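A layered gate like this is usually a chain where each layer can allow, deny, or defer. The sketch below mirrors the six layers listed above, but the `Verdict` type, `runLayers`, and the example layer bodies are my assumptions, not the useCanUseTool.tsx source:

```typescript
// Sketch of a defense-in-depth permission chain.
type Verdict = 'allow' | 'deny' | 'pass'; // 'pass' defers to the next layer
type PermissionLayer = (toolName: string, args: string) => Verdict;

function runLayers(
  layers: PermissionLayer[],
  toolName: string,
  args: string,
): boolean {
  for (const layer of layers) {
    const v = layer(toolName, args);
    if (v !== 'pass') return v === 'allow'; // first decisive layer wins
  }
  return false; // default-deny if no layer decides
}

// Example layers, in the order the article lists them (abridged to three).
const configAllowlist: PermissionLayer = (name) =>
  ['read_file', 'grep'].includes(name) ? 'allow' : 'pass';
const bashClassifier: PermissionLayer = (name, args) =>
  name === 'bash' && /rm\s+-rf/.test(args) ? 'deny' : 'pass';
const interactivePrompt: PermissionLayer = () => 'deny'; // no human present
```

The important design choice is the default-deny fallthrough: a tool call that no layer explicitly approves never runs.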

What This Means

The moat for AI coding tools is not the CLI. It is the model. Anyone can read this source code and understand the architecture, but nobody can replicate Sonnet or Opus. The queryLoop() pattern is elegant but simple — the magic is in what callModel() returns. That said, the product roadmap is now public. Competitors know about Kairos, Ultraplan, multi-agent coordination, and voice mode. That is real strategic damage.

For a company that positions itself as the responsible AI lab — the one that takes safety seriously — shipping a fully readable source map to a public registry is a notable operational security failure. The six-layer permission system in the code is impressive. The process that let a 60MB source map slip through CI/CD is not.

Watch the Deep Dive

I broke down the full AI agent architecture — the same query loop that Claude Code uses — in a 15-minute video: Watch on YouTube

For background on the "LLM talks, program walks" pattern: Read: The AI Stack Explained — LLM Talks, Program Walks

Coming next: a deep dive into Claude Code's 6-layer permission system and the Kairos memory architecture — with full code walkthroughs. Subscribe to catch it.
