Federal judge blocks Pentagon's 'supply chain risk' designation for Anthropic

A federal judge in San Francisco issued a preliminary injunction on March 27, 2026, blocking the Trump administration's designation of AI company Anthropic as a military supply chain risk—a label applied three weeks earlier amid disputes over the firm's limits on its Claude AI models for military uses like autonomous weapons.

Following the Pentagon's March 4 designation of Anthropic as a "supply chain risk"—stemming from failed negotiations over contractual restrictions on Claude AI for fully autonomous weapons and mass surveillance—U.S. District Judge Rita Lin ruled the action arbitrary, capricious, and "classic First Amendment retaliation."

The designation would have restricted government contracts with the Silicon Valley AI firm, which emphasizes safety guardrails. In her 42-page order, Lin halted it pending further review.

Under Secretary of War Emil Michael criticized the ruling on social media, saying it contained factual errors, had been rushed out amid the conflict, and undermined the president's role as Commander in Chief, and calling it "a disgrace." Secretary of War Pete Hegseth had previously argued that vendors cannot dictate how the military uses technology.

Anthropic CEO Dario Amodei had indicated plans to challenge the designation legally. Judge Lin has prior experience in related cases, such as blocking UCLA funding cuts over antisemitism concerns.

The ruling underscores ongoing tensions between AI firms' ethical limits and national security demands.

Related Articles


Pentagon designates Anthropic a ‘supply chain risk’ after dispute over military use limits for Claude AI


The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

Anthropic has filed a federal lawsuit against the U.S. Department of Defense, challenging its recent designation of the AI company as a supply chain risk. The dispute stems from a contract disagreement over military use of Anthropic's Claude AI, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates its free speech and due process rights.


US President Donald Trump has directed federal agencies to immediately cease using Anthropic's AI technology. The order follows a dispute with the Pentagon, where the company refused unconditional military use of its Claude models. Anthropic has vowed to challenge the Pentagon's ban in court.

Hundreds of employees from Google and OpenAI have signed an open letter in solidarity with Anthropic, urging their companies to resist Pentagon demands for unrestricted military use of AI models. The letter opposes uses involving domestic mass surveillance and autonomous killing without human oversight. This comes amid threats from US Defense Secretary Pete Hegseth to label Anthropic a supply chain risk.


Anthropic has launched the Anthropic Institute, a new research initiative, and opened its first Public Policy office in Washington, DC, this spring. These steps follow the AI company's recent federal lawsuit against the US government over a Defense Department supply chain risk designation tied to a contract dispute.

Elon Musk's xAI lost its bid for a preliminary injunction to block California's Assembly Bill 2013, which requires AI firms to disclose details of their training data. U.S. District Judge Jesus Bernal ruled that xAI failed to demonstrate the law reveals trade secrets or causes irreparable harm. The company must now comply with the law, in effect since January, while the lawsuit proceeds.


The Trump administration has released a National AI Legislative Framework to unify federal AI rules, address national security concerns, and counter Beijing's growing dominance in the sector. It argues that state laws should not govern areas better suited to the federal government or contradict US strategy for global AI leadership. The White House looks forward to working with Congress to turn it into legislation.
