Courtroom illustration of Anthropic suing the US DoD over AI supply-chain risk label, featuring executives, documents, and Claude AI elements.
Image generated by AI

Anthropic sues US defense department over supply chain risk designation


Anthropic has filed a federal lawsuit against the US Department of Defense, challenging its recent designation of the AI company as a supply-chain risk. The dispute stems from a contract disagreement over the use of Anthropic's Claude AI for military purposes, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates its free speech and due process rights.

The conflict between Anthropic and the US Department of Defense escalated in late February 2026, when the Pentagon sought broader access to Anthropic's Claude AI model for "all lawful purposes." Anthropic refused to remove safeguards prohibiting its use for mass domestic surveillance or fully autonomous weapons systems without human oversight. On February 26, CEO Dario Amodei stated that powerful AI enables the assembly of scattered data into comprehensive profiles of individuals at massive scale, underscoring the company's concerns.

By February 27, after Anthropic declined to alter its terms, Defense Secretary Pete Hegseth threatened to designate the company a supply-chain risk and cancel its $200 million contract. President Donald Trump then ordered all federal agencies to cease using Anthropic's technology. The Pentagon formalized the designation late last month, prompting Anthropic to file suit on March 9 in federal court. The lawsuit describes the actions as an "unprecedented and unlawful campaign of retaliation," asserting that "the Constitution does not allow the government to wield its enormous power to punish a company for its protected speech."

Pentagon officials maintain the issue is moot, as current laws prohibit such surveillance and the department has no plans for autonomous weapons. However, experts like Hamza Chaudhry of the Future of Life Institute called it a "real governance vacuum" and a wake-up call for Congress to enact clear regulations. Greg Nojeim of the Center for Democracy and Technology noted that AI models are "not reliable enough" for fully autonomous weapons, criticizing the Pentagon for rejecting expert advice.

In response, the Pentagon struck a deal with OpenAI, which included provisions against domestic surveillance of US persons. OpenAI CEO Sam Altman confirmed the tool would not be used by intelligence agencies. More than 30 employees from OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief supporting Anthropic on March 9. Despite the feud, Anthropic continues supplying its models to the military at nominal cost, including use in the ongoing war in Iran. Amodei emphasized the company's commitment to national security while pursuing legal resolution.

What people are saying

X discussions predominantly support Anthropic's lawsuit, viewing the DoD's supply-chain risk designation as retaliatory overreach for the company's refusal to allow its AI to be used in mass surveillance and autonomous weapons. Critics label it an abuse of power against an American firm, while journalists detail the free speech and due process claims. Skeptical voices question how such a designation can be enforced against contractors. Reactions highlight ethical AI boundaries and the precedent the case could set.

Related articles

Dramatic illustration of Pentagon designating Anthropic's Claude AI a supply chain risk after military usage dispute.

Pentagon designates Anthropic a ‘supply chain risk’ after dispute over military use limits for Claude AI

Reported by AI · Image generated by AI · Fact checked

The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

Following last week's federal ban on its AI tools, Anthropic has resumed negotiations with the US Defense Department to avert a supply chain risk designation. Meanwhile, OpenAI's parallel military agreement is under fire from employees, rivals, and Anthropic CEO Dario Amodei, who accused it of misleading claims in a leaked memo.


Anthropic CEO Dario Amodei declared that the company will not comply with the Pentagon's demand to remove safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the AI's use in autonomous weapons and domestic surveillance. The company, which holds a $200 million contract with the Department of Defense, emphasizes its commitment to the ethical use of AI.

Anthropic's Claude AI app has hit the top spot on Apple's App Store free apps chart, overtaking ChatGPT and Gemini, fueled by public support following President Trump's federal ban on the tool over Anthropic's AI safety refusals.


Global investors are questioning the returns on massive tech spending in artificial intelligence. Christopher Wood, from Jefferies, identifies Anthropic as a standout in the evolving AI landscape. The AI boom has boosted US equities, but concerns grow over its sustainability.

Anthropic's AI tool Claude Cowork has triggered a sharp decline in the shares of Infosys, TCS, and other SaaS companies, which lost hundreds of billions of dollars in market value. The trigger is the rise of AI.


In 2025, AI agents became central to artificial intelligence progress, enabling systems to use tools and act autonomously. From theory to everyday applications, they transformed human interactions with large language models. Yet, they also brought challenges like security risks and regulatory gaps.
