BMG sues Anthropic for copyright infringement in Claude training

Music rights company BMG has filed a lawsuit against AI firm Anthropic, alleging unauthorized use of song lyrics to train its Claude chatbot. The complaint claims infringement dates back to Anthropic's founding and involves works by artists including Justin Bieber and Bruno Mars. BMG seeks damages up to $150,000 per infringed work.

BMG filed the lawsuit on March 17, 2026, in federal court in California, accusing Anthropic of copyright infringement for using lyrics from BMG-managed compositions to train Claude. The 47-page complaint alleges that Anthropic has scraped text from public websites, pirate libraries, MusicMatch, LyricFind, and sheet music books since its 2021 founding by former OpenAI staffers.

Specific examples cited include Claude outputting significant portions of the lyrics to Bruno Mars' "Uptown Funk," the Rolling Stones' "You Can't Always Get What You Want," Louis Armstrong's "What a Wonderful World," Ariana Grande's "7 Rings," and 3 Doors Down's "Kryptonite." Even prompts for original lyrics reportedly generate derivatives or mash-ups of these works, the suit claims.

A non-exhaustive list in the complaint identifies 467 allegedly infringed songs, which could lead to at least $70 million in damages at the statutory maximum of $150,000 per work. BMG states that it sent a cease-and-desist letter in December 2025, to which Anthropic did not respond, and that it never authorized the use of its catalog.

"Anthropic has blatantly violated the copyright laws and caused direct harm to BMG and the songwriters it proudly represents," the lawsuit reads. "Generations of inventors have brought revolutionary new products to market while complying with copyright law. Anthropic's rapid development of its new technology is no excuse for its egregious law-breaking." A BMG spokesperson added, "Protecting the rights of those who entrust their life's work to BMG is essential... copyright protection and fair remuneration are non-negotiable."

The suit also alleges secondary liability for users' infringements and seeks disclosure of Anthropic's training data. It follows similar cases brought by Universal Music Publishing Group, Concord Music, and ABKCO Music since 2023. Anthropic, recently valued at $380 billion after raising $30 billion, did not comment; it maintains fair use defenses in ongoing litigation.

Related articles

Courtroom illustration of Anthropic suing the US DoD over AI supply-chain risk label, featuring executives, documents, and Claude AI elements. Image generated by AI.

Anthropic sues US defense department over supply chain risk designation

Reported by AI. Image generated by AI.

Anthropic has filed a federal lawsuit against the US Department of Defense, challenging its recent designation of the AI company as a supply-chain risk. The dispute stems from a contract disagreement over the use of Anthropic's Claude AI for military purposes, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates its free speech and due process rights.

Anthropic has announced that its AI chatbot Claude will remain free of advertisements, contrasting sharply with rival OpenAI's recent decision to test ads in ChatGPT. The company launched a Super Bowl ad campaign mocking AI assistants that interrupt conversations with product pitches. This move highlights growing tensions in the competitive AI landscape.

Reported by AI.

US President Donald Trump has directed federal agencies to immediately cease using Anthropic's Claude AI, following the company's refusal to allow its use for mass surveillance or autonomous weapons. The order includes a six-month phaseout period. This decision stems from ongoing clashes between Anthropic and the Department of Defense over AI restrictions.

The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has raised concerns about hard limits on fully autonomous weapons and mass domestic surveillance. This stems from the Pentagon's desire to apply AI models in warfighting scenarios, which Anthropic has declined.

Reported by AI.

US President Donald Trump has directed federal agencies to immediately cease using Anthropic's AI technology. The order follows a dispute with the Pentagon, where the company refused unconditional military use of its Claude models. Anthropic has vowed to challenge the Pentagon's ban in court.

Anthropic has launched the Anthropic Institute, a new research initiative, and opened its first Public Policy office in Washington, DC, this spring. These steps follow the AI company's recent federal lawsuit against the US government over a Defense Department supply chain risk designation tied to a contract dispute.

Reported by AI.

Anthropic's CEO Dario Amodei stated that the company will not comply with the Pentagon's request to remove safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the AI's use in autonomous weapons and domestic surveillance. The firm, which has a $200 million contract with the Department of Defense, emphasizes its commitment to ethical AI use.
