US Defense Secretary Pete Hegseth has threatened Anthropic with severe penalties unless the company grants the military unrestricted access to its Claude AI model. The ultimatum came during a meeting with CEO Dario Amodei in Washington on Tuesday, coinciding with Anthropic's announcement that it was relaxing its Responsible Scaling Policy. The changes replace strict safety tripwires with more flexible risk assessments amid competitive pressures.
On February 25, 2026, US Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to Washington for discussions on the company's AI usage policies. Hegseth demanded that Anthropic allow its Claude model to be used in all lawful military applications, including potentially sensitive areas such as mass surveillance and lethal missions without direct human oversight. Anthropic has expressed concerns about the reliability of current AI models for such uses. Instead, the company offered to apply its standard usage policies to government contracts, prohibiting applications such as autonomous weapons and domestic surveillance that lack human involvement.
Hegseth set a deadline of Friday, February 27, warning that failure to comply could lead to invocation of the Defense Production Act, designation of Anthropic as a supply chain risk, and exclusion from Department of Defense contracts. The company holds a $200 million contract with the Pentagon, and Claude has been utilized in classified operations, such as the January 2026 capture of Venezuelan leader Nicolás Maduro in collaboration with Palantir.
The same day, Anthropic announced modifications to its Responsible Scaling Policy, moving away from hard commitments to halt model training unless safety could be guaranteed in advance. The updated policy adopts a relative approach, emphasizing risk reports and frontier safety roadmaps to provide transparency. Anthropic cited a 'collective action problem' in the competitive AI landscape, arguing that unilateral pauses would disadvantage responsible developers while others advance without safety mitigations.
Chief science officer Jared Kaplan stated, 'We felt that it wouldn't actually help anyone for us to stop training AI models,' highlighting the rapid pace of industry progress. Chris Painter of METR described the shift as understandable but warned of a potential 'frog-boiling' effect, where flexible safety measures could erode over time. Anthropic maintains it is engaging in good-faith talks to support national security responsibly. The Pentagon is also negotiating with rivals like OpenAI, Google, and xAI to integrate their technologies into military systems.