The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically reserved for adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military may use Anthropic’s Claude models for all lawful purposes or must accept contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.
Negotiations between the Pentagon and Anthropic escalated in recent weeks as defense officials sought contract terms allowing the military to use Anthropic’s AI models for all lawful purposes.
According to The Daily Wire, Anthropic was willing to continue providing access to its models but insisted on two carve-outs: barring use in fully autonomous weapons systems and barring use for mass domestic surveillance. The outlet reported that the Biden administration accepted those terms in a 2024 contract, but the Trump administration moved to reopen the issue.
A senior Pentagon technology official, Emil Michael—identified by The Daily Wire as the Trump administration’s undersecretary overseeing the dispute—criticized what he said were constraints embedded in prior agreements. “I looked at the contracts and was like, holy cow. You can’t use them to plan a kinetic strike. You can’t use their AI model to move a satellite,” he said, according to The Daily Wire. Michael added that he wanted “terms of service” he viewed as compatible with the department’s mission.
The Daily Wire also reported that Anthropic proposed limited exceptions—such as use in planning a drone swarm or responding to a Chinese hypersonic missile—but Michael said those carve-outs were not sufficient. He also raised concerns that policy-based restrictions could create operational risk if a provider were to cut off service during a mission.
The dispute culminated when the Pentagon said it had “officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately,” according to an Associated Press report citing a Pentagon statement.
Anthropic CEO Dario Amodei said the company does not believe the designation is legally sound and plans to challenge it in court, the AP reported. In a statement posted by Anthropic on March 5, 2026, Amodei said the company received a letter on March 4 confirming the designation and argued the action’s practical scope is narrow under the cited statute, applying only to Claude use “as a direct part of” Department of War contracts—not to all customer usage.
Defense Secretary Pete Hegseth has publicly argued that vendors should not be able to restrict the lawful use of technology by the military, a view echoed in the Pentagon’s statement to the AP that the military “will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk.”
Separately, The Daily Wire reported that Trump administration AI adviser David Sacks criticized what he described as Biden-era ties between AI policy staff and Anthropic, and named former Biden officials Elizabeth Kelly and Benjamin Merkel as now working at the company. The Daily Wire also reported Anthropic said it had appointed former Trump administration official Chris Liddell to its board.
The broader legal and practical impact of the “supply chain risk” designation remains contested. Legal analysts and critics have argued that the authority being invoked is narrower than some public claims suggest and does not amount to an across-the-board ban on contractors doing any business with Anthropic, while Anthropic has said the Pentagon’s letter reflects a limited application tied to specific defense contracts.
In reporting on the dispute, the AP said the confrontation centers on Anthropic’s insistence that its technology not be used for mass surveillance of Americans or fully autonomous weapons—guardrails that the company argues are necessary even as it says operational decision-making should remain with the military.