Anthropic LEAKS AI Secrets TWICE — China Grabs Everything

Anthropic’s repeated incompetence just handed America’s AI edge to Chinese developers and rivals, exposing the hypocrisy of Big Tech elites who preach safety while fumbling their own secrets.

Story Highlights

  • Anthropic leaked 500,000 lines of proprietary Claude Code source code twice in 14 months via preventable npm packaging errors, handing advanced AI tools to global competitors.
  • Security researcher Chaofan Shou’s X post drew 33 million views, sparking a GitHub mirroring frenzy that made the code impossible to contain.
  • The leak revealed unreleased features like “Undercover Mode,” 44 hidden feature flags, and autonomous agents: internal perks not available to everyday American developers.
  • Chinese developers, barred by Anthropic’s own restrictions, grabbed the code en masse, fueling U.S.-China tech tensions under President Trump’s second term.
  • The company downplays the incident as “human error” with no customer data lost, but the irony undermines their AI safety leadership claims amid endless foreign entanglements.

Leak Details and Timeline

On March 31, 2026, Anthropic released Claude Code version 2.1.88 to npm, the world’s largest software registry. The package included a 60MB source-map file, cli.js.map, linking to 1,906 unobfuscated TypeScript files in cloud storage. These files totaled 500,000 to 512,000 lines of proprietary code. Security researcher Chaofan Shou, a Fuzzland intern, discovered the exposure and posted about it on X, amassing 33 million views within days. Hours later, GitHub repositories mirrored the code, gaining over 1,100 stars and 1,900 forks initially.
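To see why shipping a `.map` file is equivalent to shipping the source itself, consider what a source map actually contains. It is plain JSON: a `sources` array pointing at the original files (in Anthropic's case, URLs into cloud storage) and, when present, a `sourcesContent` array embedding those files verbatim. The sketch below uses a hypothetical stand-in map, not Anthropic's actual `cli.js.map`; the file names and contents are illustrative only.

```typescript
// A source map is just JSON. Its "sources" array names the original files,
// and "sourcesContent" (when present) embeds them verbatim. This map is a
// hypothetical stand-in for illustration, not Anthropic's cli.js.map.
const exampleMap = JSON.stringify({
  version: 3,
  file: "cli.js",
  sources: ["src/agent.ts", "src/flags.ts"],          // illustrative names
  sourcesContent: [
    "export const runAgent = () => { /* proprietary logic */ };",
    "export const FEATURE_FLAGS = ['undercover-mode'];",
  ],
  mappings: "AAAA",
});

const map = JSON.parse(exampleMap);

// Anyone holding the .map file can simply dump every original file it
// references -- no reverse engineering required.
for (let i = 0; i < map.sources.length; i++) {
  console.log(`--- ${map.sources[i]} ---`);
  console.log(
    map.sourcesContent?.[i] ?? "(not embedded; fetch from the URL in sources)"
  );
}
```

When `sourcesContent` is omitted, the `sources` entries still point at where the originals live, which is how a 60MB map file can expose 1,906 files hosted elsewhere.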

This marked the second identical leak; a prior incident in February 2025 involved the same npm source-map flaw, which Anthropic fixed only by yanking the version without overhauling processes. Built in TypeScript for professional developers, Claude Code relies on npm distribution. Source maps, meant for debugging, must be stripped from production releases to protect intellectual property. Anthropic’s failure here echoes broader operational lapses at firms claiming AI safety leadership.
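The safeguard the article says was skipped is a routine one: npm publishes only what the package manifest permits. Below is a minimal, hypothetical `package.json` sketch (the `files` field and its `!` negation patterns are real npm conventions; the package name and paths are illustrative) showing how debug artifacts like `*.map` files are kept out of the published tarball.

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example-cli": "dist/cli.js" },
  "files": [
    "dist/**/*.js",
    "!dist/**/*.map"
  ]
}
```

Running `npm pack --dry-run` before publishing prints exactly which files will ship; a stray 60MB `cli.js.map` would stand out immediately in that listing.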

Key Features Exposed

The leaked code unveiled unreleased capabilities, including 44 hidden feature flags, “Undercover Mode” for stealth contributions, persistent assistants, multi-agent tools, session review, remote control, and roadmaps for longer autonomous tasks. Analysts noted two tiers of AI: restricted public versions versus powerful internal ones for elites. These enterprise-focused tools could accelerate rivals’ development, eroding Anthropic’s competitive edge. Everyday American coders, burdened by high energy costs and globalist tech overreach, gain nothing while foreign actors benefit freely.

Geopolitical fallout intensified as Chinese developers, despite Anthropic’s prior restrictions on adversarial nations, rushed to mirror the code en masse. This undermines U.S. technological sovereignty in President Trump’s second term, where MAGA priorities demand America First innovation over handouts to competitors. The rapid spread highlights power shifting from corporate gatekeepers to the open-source community, bypassing takedown efforts.

Anthropic’s Response and Impacts

Anthropic acknowledged the issue in a statement to Axios, calling it a human packaging error, not a security breach, with no customer data or credentials compromised. The company implemented fixes, issued npm updates, and sent takedown notices, but widespread mirroring rendered containment impossible. External developers now reverse-engineer the code publicly, as seen in ongoing analyses.

https://twitter.com/chaofanshou/status/1774356789012345678

Short-term, rivals gain roadmap insights, damaging Anthropic’s reputation as a safety pioneer. Long-term, it accelerates industry progress, questions npm hygiene for AI tools, and fuels “two-tier AI” debates—public scraps for the masses, premium for insiders. Economically, competitors benefit; politically, it spotlights U.S.-China tensions. Optimists see democratization; skeptics warn of aiding adversaries, a common-sense concern for limited-government conservatives wary of Big Tech overpromising while underdelivering security.

Sources:

Axios: Anthropic leaked source code for its AI coding tool

NDTV: Anthropic’s AI coding tool leaks its own source code for the second time in a year

SCMP: Anthropic’s AI code leak ignites frenzy among Chinese developers