Anthropic has filed a federal lawsuit against the US Department of Defense, challenging the Pentagon's recent designation of the company as a supply-chain risk. The dispute stems from a contract disagreement over military use of Anthropic's Claude AI, including the company's restrictions on mass surveillance and autonomous weapons. Anthropic argues the designation violates its free speech and due process rights.
The conflict between Anthropic and the US Department of Defense escalated in late February 2026, when the Pentagon sought broader access to Anthropic's Claude AI model for "all lawful purposes." Anthropic refused to remove safeguards prohibiting the model's use for mass domestic surveillance or for fully autonomous weapons systems operating without human oversight. On February 26, CEO Dario Amodei warned that powerful AI makes it possible to assemble scattered data into comprehensive profiles of individuals at massive scale, underscoring the company's concerns.
By February 27, after Anthropic declined to alter its terms, Defense Secretary Pete Hegseth threatened to designate the company a supply-chain risk and cancel its $200 million contract. President Donald Trump then ordered all federal agencies to cease using Anthropic's technology. The Pentagon formalized the designation late last month, prompting Anthropic to file suit on March 9 in federal court. The lawsuit describes the actions as an "unprecedented and unlawful campaign of retaliation," asserting that "the Constitution does not allow the government to wield its enormous power to punish a company for its protected speech."
Pentagon officials maintain the issue is moot, as current laws prohibit such surveillance and the department has no plans for autonomous weapons. However, experts such as Hamza Chaudhry of the Future of Life Institute called the dispute evidence of a "real governance vacuum" and a wake-up call for Congress to enact clear regulations. Greg Nojeim of the Center for Democracy and Technology noted that AI models are "not reliable enough" for fully autonomous weapons, criticizing the Pentagon for rejecting expert advice.
In response, the Pentagon struck a deal with OpenAI that included provisions against domestic surveillance of US persons; OpenAI CEO Sam Altman confirmed the tool would not be used by intelligence agencies. More than 30 employees from OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief supporting Anthropic on March 9. Despite the feud, Anthropic continues supplying its models to the military at nominal cost, including for use in the ongoing war in Iran. Amodei emphasized the company's commitment to national security while it pursues a legal resolution.