Split-scene illustration of Anthropic's renewed Pentagon talks contrasting with backlash against OpenAI's military AI deal.
Image generated by AI

Anthropic resumes Pentagon talks as OpenAI military deal faces backlash


Following last week's federal ban on its AI tools, Anthropic has resumed negotiations with the US Defense Department to avert a supply chain risk designation. Meanwhile, OpenAI's parallel military agreement is under fire from employees, rivals, and Anthropic CEO Dario Amodei, who accused it of misleading claims in a leaked memo.

In a bid to avoid being labeled a supply chain risk—a designation typically reserved for foreign adversaries—Anthropic is back in talks with the Pentagon, the Financial Times and Bloomberg reported on March 5, 2026. CEO Dario Amodei is negotiating with Under Secretary of Defense for Research and Engineering Emil Michael, after a prior $200 million contract from 2025 collapsed over language prohibiting mass surveillance.

Amodei detailed the breakdown in a memo to staff: The department offered to honor Anthropic's terms if it removed a clause on 'analysis of bulk acquired data'—precisely the surveillance scenario Anthropic sought to block. Anthropic refused, prompting the Pentagon to threaten cancellation and the risk label. President Trump then ordered federal agencies to cease using Anthropic technology on February 28, though a six-month phase-out permitted continued access, including for planning an air strike on Iran.

In the memo, Amodei lambasted OpenAI's response as 'just straight up lies' and attributed some of Anthropic's troubles to its lack of 'dictator-style praise to Trump,' in contrast with OpenAI CEO Sam Altman. OpenAI secured its own Defense Department deal shortly after Anthropic's fallout, with Altman claiming on X that he had advised against the risk designation and suggesting Anthropic should have accepted similar terms. OpenAI later amended its agreement to bar mass surveillance of Americans.

OpenAI staff criticized the deal in an all-hands meeting, pressing Altman for details; on social media, he acknowledged internal sloppiness. OpenAI had previously prohibited military use of its models but allowed Pentagon testing via Microsoft. The controversy boosted Anthropic's Claude to the top of Apple's free apps chart.

Part of the Anthropic–Pentagon AI contract dispute series.

What people are saying

X discussions focus on Anthropic resuming talks with the Pentagon to avert a supply chain risk designation after it refused to remove AI safeguards against surveillance and autonomous weapons. Dario Amodei's leaked memo accuses OpenAI's military deal of 'straight-up lies' and 'safety theater,' sparking backlash that includes user boycotts of ChatGPT and praise for Anthropic's ethics. Sentiments range from support for AI safety principles and criticism of perceived OpenAI hypocrisy to neutral reporting and skepticism toward government pressure.

Related Articles


Anthropic sues US defense department over supply chain risk designation

Reported by AI

Anthropic has filed a federal lawsuit against the US Department of Defense, challenging its recent designation of the AI company as a supply-chain risk. The dispute stems from a contract disagreement over military use of Anthropic's Claude AI, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates its free speech and due process rights.

Anthropic's CEO Dario Amodei stated that the company will not comply with the Pentagon's request to remove safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the AI's use in autonomous weapons and domestic surveillance. The firm, which has a $200 million contract with the Department of Defense, emphasizes its commitment to ethical AI use.


US President Donald Trump has directed all federal agencies to immediately cease using Anthropic's AI tools amid a dispute over military applications. The move follows weeks of clashes between Anthropic and Pentagon officials regarding restrictions on AI for mass surveillance and autonomous weapons. A six-month phase-out period has been announced.

US President Donald Trump said on Friday that he is directing government agencies to stop working with Anthropic. The Pentagon plans to declare the startup a supply-chain risk, a major blow following the showdown over technology guardrails. Agencies using the company's products will have a six-month phase-out period.


Anthropic has launched the Anthropic Institute, a new research initiative, and opened its first Public Policy office in Washington, DC, this spring. These steps follow the AI company's recent federal lawsuit against the US government over a Defense Department supply chain risk designation tied to a contract dispute.

Anthropic has limited access to its Claude Mythos Preview AI model because of its superior ability to detect and exploit software vulnerabilities, while launching Project Glasswing—a consortium of more than 45 tech firms including Apple, Google, and Microsoft—to collaboratively patch flaws and bolster defenses. The announcement follows recent data leaks at the firm.


Caitlin Kalinowski, OpenAI's head of robotics, has resigned, citing insufficient deliberation on ethical guardrails in the company's recent deal with the Department of Defense. She expressed concerns over potential surveillance and autonomous weapons in a post on X. OpenAI acknowledged her departure and reiterated its commitments against domestic surveillance and lethal autonomous systems.

