Dramatic illustration of Pentagon designating Anthropic's Claude AI a supply chain risk after military usage dispute.
AI-generated image

Pentagon designates Anthropic a ‘supply chain risk’ after dispute over military use limits for Claude AI

Fact-checked

The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

Negotiations between the Pentagon and Anthropic escalated in recent weeks as defense officials sought contract terms allowing the military to use Anthropic’s AI models for all lawful purposes.

According to The Daily Wire, Anthropic was willing to continue providing access to its models but insisted on two carve-outs: barring use in fully autonomous weapons systems and barring use for mass domestic surveillance. The outlet reported that the Biden administration accepted those terms in a 2024 contract, but the Trump administration moved to reopen the issue.

A senior Pentagon technology official, Emil Michael—identified by The Daily Wire as the Trump administration’s undersecretary overseeing the dispute—criticized what he said were constraints embedded in prior agreements. “I looked at the contracts and was like, holy cow. You can’t use them to plan a kinetic strike. You can’t use their AI model to move a satellite,” he said, according to The Daily Wire. Michael added that he wanted “terms of service” he viewed as compatible with the department’s mission.

The Daily Wire also reported that Anthropic proposed limited exceptions—such as use in planning a drone swarm or responding to a Chinese hypersonic missile—but Michael said those carve-outs were not sufficient. He also raised concerns that policy-based restrictions could create operational risk if a provider were to cut off service during a mission.

The dispute culminated when the Pentagon said it had “officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately,” according to an Associated Press report citing a Pentagon statement.

Anthropic CEO Dario Amodei said the company does not believe the designation is legally sound and plans to challenge it in court, the AP reported. In a statement posted by Anthropic on March 5, 2026, Amodei said the company received a letter on March 4 confirming the designation and argued the action’s practical scope is narrow under the cited statute, applying only to Claude use “as a direct part of” Department of War contracts—not to all customer usage.

Defense Secretary Pete Hegseth has publicly argued that vendors should not be able to restrict the lawful use of technology by the military, a view echoed in the Pentagon’s statement to the AP that the military “will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk.”

Separately, The Daily Wire reported that Trump administration AI adviser David Sacks criticized what he described as Biden-era ties between AI policy staff and Anthropic, and named former Biden officials Elizabeth Kelly and Benjamin Merkel as now working at the company. The Daily Wire also reported Anthropic said it had appointed former Trump administration official Chris Liddell to its board.

The broader legal and practical impact of the “supply chain risk” designation remains contested. Legal analysts and critics have argued the authority being invoked is narrower than some public claims about an across-the-board ban on contractors doing any business with Anthropic, while Anthropic has said the Pentagon’s letter reflects a limited application tied to specific defense contracts.

In reporting on the dispute, the AP said the confrontation centers on Anthropic’s insistence that its technology not be used for mass surveillance of Americans or fully autonomous weapons—guardrails that the company argues are necessary even as it says operational decision-making should remain with the military.

What people are saying

Discussions on X reveal polarized reactions to the Pentagon's designation of Anthropic as a supply chain risk. Supporters praise Anthropic's refusal to allow Claude AI for autonomous weapons and mass surveillance, viewing it as an ethical stand against military overreach. Critics argue it undermines U.S. national security and AI competitiveness, benefiting rivals like xAI and OpenAI. Neutral posts report the developments, while some express skepticism about government motives and potential legal battles.

Related articles

Courtroom illustration of Anthropic suing the US DoD over AI supply-chain risk label, featuring executives, documents, and Claude AI elements.

Anthropic sues US defense department over supply chain risk designation


Anthropic has filed a federal lawsuit against the US Department of Defense, challenging its recent label of the AI company as a supply-chain risk. The dispute stems from a contract disagreement over the use of Anthropic's Claude AI for military purposes, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates free speech and due process rights.

The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has insisted on hard limits barring use of its technology in fully autonomous weapons and mass domestic surveillance. The standoff stems from the Pentagon's desire to apply AI models in warfighting scenarios without such restrictions, terms Anthropic has declined to accept.


US President Donald Trump has directed federal agencies to immediately cease using Anthropic's AI technology. The order follows a dispute with the Pentagon, where the company refused unconditional military use of its Claude models. Anthropic has vowed to challenge the Pentagon's ban in court.

Anthropic's Claude AI app has hit the top spot on Apple's App Store free apps chart, overtaking ChatGPT and Gemini, fueled by public support following President Trump's federal ban on the tool over Anthropic's AI safety refusals.


Global investors are questioning the returns on massive tech spending in artificial intelligence. Christopher Wood, from Jefferies, identifies Anthropic as a standout in the evolving AI landscape. The AI boom has boosted US equities, but concerns grow over its sustainability.

Anthropic has announced that its AI chatbot Claude will remain free of advertisements, contrasting sharply with rival OpenAI's recent decision to test ads in ChatGPT. The company launched a Super Bowl ad campaign mocking AI assistants that interrupt conversations with product pitches. This move highlights growing tensions in the competitive AI landscape.


Anthropic's Claude Cowork AI tool has triggered a sharp decline in the stocks of Infosys, TCS, and other SaaS companies, which collectively lost hundreds of billions of dollars in market value amid investor concern that AI tools could displace traditional software-services work.

