Appeals court denies Anthropic stay amid supply chain risk blacklist fight

In the latest development in the Anthropic supply chain risk controversy, a U.S. federal appeals court on April 9 denied Anthropic's emergency motion to block the Trump administration's blacklisting of its AI technology. The court expedited oral arguments for May 19 but ruled that the balance of equities favored the government, a setback for the company following a prior district court injunction in its favor.

The U.S. Court of Appeals for the District of Columbia Circuit refused to halt the Trump administration's designation of Anthropic as a national security supply-chain risk. A panel of three Republican-appointed judges, including Trump appointees Gregory Katsas and Neomi Rao, acknowledged potential irreparable harm to Anthropic, including financial losses and alleged retaliation for First Amendment-protected speech, but found insufficient evidence of chilled speech and held that the government's equities prevailed amid ongoing military conflict.

The blacklist stems from Anthropic's refusal to allow its Claude AI models to be used for autonomous warfare and mass surveillance of Americans. President Trump directed federal agencies to stop using the technology, and Defense Secretary Pete Hegseth barred military contractors from dealing with the firm. Acting Attorney General Todd Blanche called the ruling a "resounding victory for military readiness," emphasizing presidential authority over the Department of War (formerly Defense).

This follows U.S. District Judge Rita Lin's March 27 preliminary injunction in California, which blocked the initial March 4 designation as arbitrary and as retaliation for First Amendment-protected speech; the administration is appealing that ruling to the 9th Circuit. Anthropic expressed confidence that courts will ultimately deem the blacklist unlawful and reiterated its commitment to safe AI. The Computer & Communications Industry Association warned that procedural lapses in such designations could harm U.S. innovation.

Part of the 'Anthropic supply chain risk controversy' series.

Related Articles


Anthropic sues US defense department over supply chain risk designation


Anthropic has filed a federal lawsuit against the US Department of Defense, challenging its recent label of the AI company as a supply-chain risk. The dispute stems from a contract disagreement over the use of Anthropic's Claude AI for military purposes, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates free speech and due process rights.

A federal judge in San Francisco issued a preliminary injunction on March 27, 2026, blocking the Trump administration's designation of AI company Anthropic as a military supply chain risk—a label applied three weeks earlier amid disputes over the firm's limits on its Claude AI models for military uses like autonomous weapons.


The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has raised concerns about hard limits on fully autonomous weapons and mass domestic surveillance. This stems from the Pentagon's desire to apply AI models in warfighting scenarios, which Anthropic has declined.


Hundreds of employees from Google and OpenAI have signed an open letter in solidarity with Anthropic, urging their companies to resist Pentagon demands for unrestricted military use of AI models. The letter opposes uses involving domestic mass surveillance and autonomous killing without human oversight. This comes amid threats from US Defense Secretary Pete Hegseth to label Anthropic a supply chain risk.

Anthropic has limited access to its Claude Mythos Preview AI model due to its superior ability to detect and exploit software vulnerabilities, while launching Project Glasswing—a consortium with over 45 tech firms including Apple, Google, and Microsoft—to collaboratively patch flaws and bolster defenses. The announcement follows recent data leaks at the firm.


Anthropic's recent update to its CoWork platform has led to significant market reactions in the software industry. The U.S. software sector saw a widespread sell-off, losing over $1 trillion in value, according to Fortune. This development highlights investor uncertainty around AI-native workflows and their impact on SaaS stocks.

