Split-scene illustration of Anthropic's renewed Pentagon talks contrasting with backlash against OpenAI's military AI deal.

Anthropic resumes Pentagon talks as OpenAI military deal faces backlash


Following last week's federal ban on its AI tools, Anthropic has resumed negotiations with the US Defense Department to avert a supply chain risk designation. Meanwhile, OpenAI's parallel military agreement is under fire from employees, rivals, and Anthropic CEO Dario Amodei, who accused it of misleading claims in a leaked memo.

In a bid to avoid being labeled a supply chain risk—a designation typically reserved for foreign adversaries—Anthropic is back in talks with the Pentagon, the Financial Times and Bloomberg reported on March 5, 2026. CEO Dario Amodei is negotiating with Under Secretary of Defense for Research and Engineering Emil Michael after a prior $200 million contract from 2025 collapsed over language prohibiting mass surveillance.

Amodei detailed the breakdown in a memo to staff: The department offered to honor Anthropic's terms if it removed a clause on 'analysis of bulk acquired data'—precisely the surveillance scenario Anthropic sought to block. Anthropic refused, prompting the Pentagon to threaten cancellation and the risk label. President Trump then ordered federal agencies to cease using Anthropic technology on February 28, though a six-month phase-out permitted continued access, including for planning an air strike on Iran.

Amodei lambasted OpenAI's response as 'just straight up lies' in the memo, attributing some of Anthropic's troubles to the company not offering the kind of 'dictator-style praise to Trump' that OpenAI CEO Sam Altman has. OpenAI secured its own Defense Department deal shortly after Anthropic's fallout, with Altman claiming on X that he had advised against the risk designation and suggesting Anthropic should have accepted similar terms. OpenAI later amended its agreement to bar mass surveillance of Americans.

OpenAI staff criticized the deal in an all-hands meeting, pressing Altman for details; he acknowledged internal sloppiness on social media. Previously, OpenAI prohibited military use but allowed Pentagon testing via Microsoft. The controversy boosted Anthropic's Claude to the top of Apple's free apps chart.

Part of the Anthropic–Pentagon AI contract dispute series.

What people are saying

X discussions focus on Anthropic resuming talks with the Pentagon to avert a supply chain risk designation after refusing to remove AI safeguards on surveillance and autonomous weapons. Dario Amodei's leaked memo accuses OpenAI's military deal of 'straight-up lies' and 'safety theater,' sparking backlash including user boycotts of ChatGPT and praise for Anthropic's ethics. Sentiments include support for AI safety principles, criticism of OpenAI hypocrisy, neutral reporting, and skepticism toward government pressure.

Related articles


Trump orders federal ban on Anthropic AI for government use


US President Donald Trump has directed all federal agencies to immediately cease using Anthropic's AI tools amid a dispute over military applications. The move follows weeks of clashes between Anthropic and Pentagon officials regarding restrictions on AI for mass surveillance and autonomous weapons. A six-month phase-out period has been announced.

Anthropic's CEO Dario Amodei stated that the company will not comply with the Pentagon's request to remove safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the AI's use in autonomous weapons and domestic surveillance. The firm, which has a $200 million contract with the Department of Defense, emphasizes its commitment to ethical AI use.


Hundreds of employees from Google and OpenAI have signed an open letter in solidarity with Anthropic, urging their companies to resist Pentagon demands for unrestricted military use of AI models. The letter opposes uses involving domestic mass surveillance and autonomous killing without human oversight. This comes amid threats from US Defense Secretary Pete Hegseth to label Anthropic a supply chain risk.

Anthropic has extended its memory capability to the free tier of its Claude AI chatbot, allowing users to reference past conversations. The company also released a tool to import memories from competing chatbots like ChatGPT and Gemini. This update coincides with Claude's surge in popularity amid a dispute with the US Department of Defense.


On February 5, 2026, Anthropic and OpenAI simultaneously launched products shifting users from chatting with AI to managing teams of AI agents. Anthropic introduced Claude Opus 4.6 with agent teams for developers, while OpenAI unveiled Frontier and GPT-5.3-Codex for enterprise workflows. These releases coincide with a $285 billion drop in software stocks amid fears of AI disrupting traditional SaaS vendors.

In 2025, AI agents became the centerpiece of AI development, enabling systems to use tools and act autonomously. From theory to everyday applications, they transformed how humans interact with large language models. They also brought challenges, however, such as security risks and regulatory gaps.


Anthropic's official Git MCP server contained security vulnerabilities that could be chained together to severe effect, a recent TechRadar report highlighted. Details emerged on the potential risks to the AI company's infrastructure.
