Courtroom illustration of Anthropic suing the US DoD over AI supply-chain risk label, featuring executives, documents, and Claude AI elements.
AI-generated image

Anthropic sues US defense department over supply chain risk designation


Anthropic has filed a federal lawsuit against the US Department of Defense, challenging the department's recent designation of the company as a supply-chain risk. The dispute stems from a contract disagreement over the use of Anthropic's Claude AI for military purposes, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates its free speech and due process rights.

The conflict between Anthropic and the US Department of Defense escalated in late February 2026, when the Pentagon sought broader access to Anthropic's Claude AI model for "all lawful purposes." Anthropic refused to remove safeguards prohibiting its use for mass domestic surveillance or fully autonomous weapons systems without human oversight. On February 26, CEO Dario Amodei stated that powerful AI enables the assembly of scattered data into comprehensive profiles of individuals at massive scale, underscoring the company's concerns.

By February 27, after Anthropic declined to alter its terms, Defense Secretary Pete Hegseth threatened to designate the company a supply-chain risk and cancel its $200 million contract. President Donald Trump then ordered all federal agencies to cease using Anthropic's technology. The Pentagon formalized the designation late last month, prompting Anthropic to file suit on March 9 in federal court. The lawsuit describes the actions as an "unprecedented and unlawful campaign of retaliation," asserting that "the Constitution does not allow the government to wield its enormous power to punish a company for its protected speech."

Pentagon officials maintain the issue is moot, as current laws prohibit such surveillance and the department has no plans for autonomous weapons. However, experts like Hamza Chaudhry of the Future of Life Institute called it a "real governance vacuum" and a wake-up call for Congress to enact clear regulations. Greg Nojeim of the Center for Democracy and Technology noted that AI models are "not reliable enough" for fully autonomous weapons, criticizing the Pentagon for rejecting expert advice.

In response, the Pentagon struck a deal with OpenAI, which included provisions against domestic surveillance of US persons. OpenAI CEO Sam Altman confirmed the tool would not be used by intelligence agencies. More than 30 employees from OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief supporting Anthropic on March 9. Despite the feud, Anthropic continues supplying its models to the military at nominal cost, including use in the ongoing war in Iran. Amodei emphasized the company's commitment to national security while pursuing legal resolution.

What people are saying

Discussions on X predominantly support Anthropic's lawsuit, viewing the DoD's supply-chain risk designation as retaliatory overreach against a company that refused to allow its AI to be used for mass surveillance and autonomous weapons. Critics call it an abuse of power against an American firm, while journalists detail the free speech and due process claims. Skeptical voices question how such restrictions could be enforced on contractors. Reactions highlight the ethical boundaries of AI and the precedents the case could set.

Related articles

Dramatic illustration of Pentagon designating Anthropic's Claude AI a supply chain risk after military usage dispute.

Pentagon designates Anthropic a ‘supply chain risk’ after dispute over military use limits for Claude AI

Reported by AI · AI-generated image · Fact-checked

The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

Following last week's federal ban on its AI tools, Anthropic has resumed negotiations with the US Defense Department to avert a supply chain risk designation. Meanwhile, OpenAI's parallel military agreement is under fire from employees, rivals, and Anthropic CEO Dario Amodei, who accused it of misleading claims in a leaked memo.


Anthropic's CEO Dario Amodei stated that the company will not comply with the Pentagon's request to remove safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the AI's use in autonomous weapons and domestic surveillance. The firm, which has a $200 million contract with the Department of Defense, emphasizes its commitment to ethical AI use.

Anthropic's Claude AI app has hit the top spot on Apple's App Store free apps chart, overtaking ChatGPT and Gemini, fueled by public support following President Trump's federal ban on the tool over Anthropic's AI safety refusals.


Global investors are questioning the returns on massive tech spending in artificial intelligence. Christopher Wood of Jefferies identifies Anthropic as a standout in the evolving AI landscape. The AI boom has boosted US equities, but concerns are growing over its sustainability.

Anthropic's Claude Cowork AI tool has triggered a sharp decline in the stocks of Infosys, TCS, and other SaaS companies, which have lost hundreds of billions of dollars in market value as the rise of AI unsettles the sector.


In 2025, AI agents became central to progress in artificial intelligence, enabling systems to use tools and act autonomously. From theory to everyday applications, they transformed how humans interact with large language models. But they also brought challenges, including security risks and regulatory gaps.
