Anthropic's Claude Code CLI source code leaks online

The source code of Anthropic's Claude Code command-line interface has leaked online after a packaging error in a recent release. The incident exposed over 512,000 lines of code from nearly 2,000 TypeScript files. The company described it as human error and said no sensitive data was involved.

Anthropic released version 2.1.88 of its Claude Code npm package on March 31, 2026. The package inadvertently included a source map file, granting access to the full codebase. Security researcher Chaofan Shou first highlighted the issue on X, sharing a link to an archive of the files. The code soon appeared in a public GitHub repository, which has been forked tens of thousands of times since then.

No customer data or credentials were exposed, according to Anthropic officials, who called it a release packaging issue caused by human error, not a security breach. The company stated: “Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”

Developers quickly began analyzing the leaked code. One overview detailed Claude Code’s memory architecture, including background memory rewriting and validation steps. Another analysis counted around 40,000 lines for a plugin-like tool system and 46,000 for the query system, describing Claude Code as a sophisticated, production-grade developer tool rather than a simple API wrapper.

Prior community efforts had partially reverse-engineered Claude Code, but this leak is far more complete. The exposure gives competitors insight into Anthropic’s architecture and into potential vulnerabilities in its guardrails.
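The leak vector is worth spelling out: source map (v3) files often embed the full original file contents in a `sourcesContent` array, so shipping a `.map` file alongside a minified bundle can be equivalent to shipping the source itself. A minimal TypeScript sketch of how original files are recovered from such a map; the file paths and contents below are hypothetical examples, not from the leaked package:

```typescript
// Source map v3 shape (only the fields relevant here).
interface SourceMapV3 {
  version: number;
  sources: string[];               // original file paths
  sourcesContent?: (string | null)[]; // original file contents, if embedded
  mappings: string;
}

// Hypothetical map, standing in for what a bundler might emit.
const map: SourceMapV3 = {
  version: 3,
  sources: ["src/agent.ts", "src/tools/plugin.ts"],
  sourcesContent: [
    "export const agent = () => 'run';",
    "export const plugin = () => 'load';",
  ],
  mappings: "AAAA",
};

// Recover every original file whose content is embedded in the map.
function extractSources(m: SourceMapV3): Map<string, string> {
  const files = new Map<string, string>();
  m.sources.forEach((path, i) => {
    const content = m.sourcesContent?.[i];
    if (content != null) files.set(path, content);
  });
  return files;
}

const recovered = extractSources(map);
console.log(recovered.size); // prints 2
```

Excluding `.map` files from the npm `files` allowlist, or building without embedded `sourcesContent`, prevents this class of exposure.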

Related Articles


Anthropic ends unlimited Claude access via third-party agents, requires extra payments for heavy use

Reported by AI. Image generated by AI.

Anthropic has restricted unlimited access to its Claude AI models through third-party agents like OpenClaw, requiring heavy users to pay extra via API keys or usage bundles starting April 4, 2026. The policy shift, announced over the weekend, addresses severe system strain from high-volume agent tools previously covered under $20 monthly subscriptions.

Anthropic has confirmed the leak of more than 512,000 lines of source code for its Claude Code tool. The disclosure reveals disabled features hinting at future developments, including a persistent background agent called Kairos. Observers examining the code also found references to stealth modes and a virtual assistant named Buddy.


Anthropic has limited access to its Claude Mythos Preview AI model due to its superior ability to detect and exploit software vulnerabilities, while launching Project Glasswing—a consortium with over 45 tech firms including Apple, Google, and Microsoft—to collaboratively patch flaws and bolster defenses. The announcement follows recent data leaks at the firm.

Anthropic has launched a legal plugin for its Claude Cowork tool, prompting concerns among dedicated legal AI providers. The plugin offers useful features for contract review and compliance but falls short of replacing specialized platforms. South African firms face additional hurdles due to data protection regulations.


Anthropic is temporarily doubling usage limits for its Claude AI chatbot during off-peak hours from March 13 to March 27. The promotion applies to Free, Pro, Max, and Team plan users, excluding Enterprise plans. It activates automatically across web, desktop, mobile, and integrated apps.

Anthropic's Claude AI app has hit the top spot on Apple's App Store free apps chart, overtaking ChatGPT and Gemini, fueled by public support following President Trump's federal ban on the tool over Anthropic's AI safety refusals.


Researchers have identified three high-risk vulnerabilities in Claude.ai. These enable an end-to-end attack chain that exfiltrates sensitive information without the user's knowledge. A legitimate Google ad could trigger data exfiltration.
