BMG sues Anthropic for copyright infringement in Claude training

Music rights company BMG has filed a lawsuit against AI firm Anthropic, alleging unauthorized use of song lyrics to train its Claude chatbot. The complaint claims infringement dates back to Anthropic's founding and involves works by artists including Justin Bieber and Bruno Mars. BMG seeks damages up to $150,000 per infringed work.

BMG filed the lawsuit on March 17, 2026, in federal court in California, accusing Anthropic of copyright infringement for using lyrics from BMG-managed compositions to train Claude. The 47-page complaint details how Anthropic allegedly scraped text from public websites, illegal pirate libraries, MusicMatch, LyricFind, and sheet music books since its 2021 founding by former OpenAI staffers.

Specific examples include Claude outputting significant portions of lyrics from Bruno Mars' 'Uptown Funk,' the Rolling Stones' 'You Can't Always Get What You Want,' Louis Armstrong's 'What A Wonderful World,' Ariana Grande's '7 Rings,' and 3 Doors Down's 'Kryptonite.' Even prompts for original lyrics reportedly generate derivatives or mash-ups of these works, the suit claims.

A non-exhaustive list in the complaint identifies 467 allegedly infringed songs, which could yield at least $70 million in damages at the statutory maximum of $150,000 per work. BMG says it sent a cease-and-desist letter in December 2025, to which Anthropic did not respond, and that it never authorized the use.

'Anthropic has blatantly violated the copyright laws and caused direct harm to BMG and the songwriters it proudly represents,' the lawsuit reads. 'Generations of inventors have brought revolutionary new products to market while complying with copyright law. Anthropic's rapid development of its new technology is no excuse for its egregious law-breaking.' A BMG spokesperson added, 'Protecting the rights of those who entrust their life's work to BMG is essential... copyright protection and fair remuneration are non-negotiable.'

The suit also alleges secondary liability for users' infringements and seeks disclosure of Anthropic's training data. It follows similar cases brought by Universal Music Publishing Group, Concord Music, and ABKCO Music since 2023. Anthropic, recently valued at $380 billion after raising $30 billion, did not comment; it maintains fair use defenses in ongoing litigation.

Related articles

Courtroom illustration of Anthropic suing the US DoD over AI supply-chain risk label, featuring executives, documents, and Claude AI elements.
AI-generated image

Anthropic sues US defense department over supply chain risk designation

Reported by AI. AI-generated image.

Anthropic has filed a federal lawsuit against the US Department of Defense, challenging its recent label of the AI company as a supply-chain risk. The dispute stems from a contract disagreement over the use of Anthropic's Claude AI for military purposes, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates free speech and due process rights.

Anthropic has restricted unlimited access to its Claude AI models through third-party agents like OpenClaw, requiring heavy users to pay extra via API keys or usage bundles starting April 4, 2026. The policy shift, announced over the weekend, addresses severe system strain from high-volume agent tools previously covered under $20 monthly subscriptions.

Reported by AI

US President Donald Trump has directed federal agencies to immediately cease using Anthropic's Claude AI, following the company's refusal to allow its use for mass surveillance or autonomous weapons. The order includes a six-month phaseout period. This decision stems from ongoing clashes between Anthropic and the Department of Defense over AI restrictions.

The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

Reported by AI

The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has raised concerns about hard limits on fully autonomous weapons and mass domestic surveillance. This stems from the Pentagon's desire to apply AI models in warfighting scenarios, which Anthropic has declined.

Anthropic has released a beta add-on bringing its Claude AI assistant to Microsoft Word, available now to customers on Team and Enterprise plans. The integration allows users to generate new content, edit documents, and handle comments within the app. It offers an alternative to Microsoft's Copilot.

Reported by AI

In the wake of Anthropic's unveiling of its powerful Claude Mythos AI—capable of detecting and exploiting software vulnerabilities—the US Treasury Secretary has convened top bank executives to highlight escalating AI-driven cyber threats. The move underscores growing concerns as the AI is restricted to a tech coalition via Project Glasswing.
