Pentagon may sever ties with Anthropic over AI safeguards

The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has raised concerns about hard limits on fully autonomous weapons and mass domestic surveillance. The dispute stems from the Pentagon's desire to apply AI models in warfighting scenarios, a use Anthropic has declined to support.

The potential rift between the Pentagon and Anthropic highlights tensions in AI development for military purposes. According to reports, the U.S. Department of Defense seeks to integrate advanced AI models into warfighting applications. However, Anthropic has expressed firm reservations, emphasizing the need for strict boundaries around the use of fully autonomous weapons and extensive domestic surveillance programs.

Anthropic's position underscores its commitment to ethical AI deployment, refusing to support military applications that could cross these lines. The company, known for its Claude language model, prioritizes safety measures that the Pentagon's plans appear to challenge. No official confirmation of the severance has been issued, but the disagreement points to broader debates on AI governance in defense contexts.

This development, reported on February 16, 2026, reflects ongoing scrutiny of how AI technologies are regulated in sensitive sectors.

Related Articles


Pentagon disputes Anthropic limits on Claude’s military use as contract talks strain


In late February, Anthropic CEO Dario Amodei said the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons. In response, senior Pentagon officials said they have no intention of using AI for domestic surveillance and insisted that private firms cannot set binding limits on how the U.S. military employs AI tools.

US Defense Secretary Pete Hegseth has threatened Anthropic with severe penalties unless the company grants the military unrestricted access to its Claude AI model. The ultimatum came during a meeting with CEO Dario Amodei in Washington on Tuesday, coinciding with Anthropic's announcement that it would relax its Responsible Scaling Policy, shifting from strict safety tripwires to more flexible risk assessments amid competitive pressures.


The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

US President Donald Trump stated on Friday that he is directing government agencies to stop working with Anthropic. The Pentagon plans to declare the startup a supply-chain risk, marking a major blow following a showdown over technology guardrails. Agencies using the company's products will have a six-month phase-out period.


Anthropic has limited access to its Claude Mythos Preview AI model, citing the model's superior ability to detect and exploit software vulnerabilities. At the same time, the company launched Project Glasswing, a consortium of more than 45 tech firms including Apple, Google, and Microsoft, to collaboratively patch flaws and bolster defenses. The announcement follows recent data leaks at the firm.

Anthropic announced on Wednesday the launch of Claude Managed Agents, a new product aimed at simplifying the creation and deployment of AI agents for businesses. The tool provides developers with ready-made infrastructure to build autonomous AI systems. It addresses a key barrier in automating work tasks amid the company's rapid enterprise growth.


The UK Department for Science, Innovation and Technology has proposed that Anthropic expand its London office and pursue a potential dual stock listing, according to a Financial Times report. This effort follows a dispute between the San Francisco-based AI company and the US Department of Defense. Officials aim to attract Anthropic amid ongoing tensions.

