A federal judge in San Francisco issued a preliminary injunction on March 27, 2026, blocking the Trump administration's designation of AI company Anthropic as a military supply chain risk. The Pentagon applied the label on March 4, after negotiations broke down over Anthropic's contractual restrictions barring its Claude AI models from uses such as fully autonomous weapons and mass surveillance. U.S. District Judge Rita Lin ruled the designation arbitrary and capricious, calling it "classic First Amendment retaliation."
The designation would have restricted government contracts with the Silicon Valley AI firm, which emphasizes safety guardrails on its products. In her 42-page order, Lin halted the designation pending further review.
Under Secretary of War Emil Michael criticized the ruling on social media, saying it contained factual errors, was rushed out amid conflict, and undermined the president's role as Commander in Chief, and calling it "a disgrace." Secretary of War Pete Hegseth had previously argued that vendors cannot dictate how the military uses technology.
Anthropic CEO Dario Amodei had signaled plans to challenge the designation in court. Judge Lin has handled related disputes before, including blocking funding cuts to UCLA over antisemitism concerns.
The ruling underscores ongoing tensions between AI firms' ethical limits and national security demands.