Courtroom illustration of Anthropic suing the US DoD over AI supply-chain risk label, featuring executives, documents, and Claude AI elements.
AI-generated image

Anthropic sues US Defense Department over supply-chain risk designation


Anthropic has filed a federal lawsuit against the US Department of Defense, challenging its recent designation of the AI company as a supply-chain risk. The dispute stems from a contract disagreement over the use of Anthropic's Claude AI for military purposes, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates its free speech and due process rights.

The conflict between Anthropic and the US Department of Defense escalated in late February 2026, when the Pentagon sought broader access to Anthropic's Claude AI model for "all lawful purposes." Anthropic refused to remove safeguards prohibiting its use for mass domestic surveillance or fully autonomous weapons systems without human oversight. On February 26, CEO Dario Amodei stated that powerful AI enables the assembly of scattered data into comprehensive profiles of individuals at massive scale, underscoring the company's concerns.

By February 27, after Anthropic declined to alter its terms, Defense Secretary Pete Hegseth threatened to designate the company a supply-chain risk and cancel its $200 million contract. President Donald Trump then ordered all federal agencies to cease using Anthropic's technology. The Pentagon formalized the designation late last month, prompting Anthropic to file suit on March 9 in federal court. The lawsuit describes the actions as an "unprecedented and unlawful campaign of retaliation," asserting that "the Constitution does not allow the government to wield its enormous power to punish a company for its protected speech."

Pentagon officials maintain the issue is moot, as current laws prohibit such surveillance and the department has no plans for autonomous weapons. However, experts like Hamza Chaudhry of the Future of Life Institute called it a "real governance vacuum" and a wake-up call for Congress to enact clear regulations. Greg Nojeim of the Center for Democracy and Technology noted that AI models are "not reliable enough" for fully autonomous weapons, criticizing the Pentagon for rejecting expert advice.

In response, the Pentagon struck a deal with OpenAI, which included provisions against domestic surveillance of US persons. OpenAI CEO Sam Altman confirmed the tool would not be used by intelligence agencies. More than 30 employees from OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief supporting Anthropic on March 9. Despite the feud, Anthropic continues supplying its models to the military at nominal cost, including use in the ongoing war in Iran. Amodei emphasized the company's commitment to national security while pursuing legal resolution.

What people are saying

X discussions predominantly support Anthropic's lawsuit, viewing the DoD's supply chain risk designation as retaliatory overreach for refusing AI use in mass surveillance and autonomous weapons. Critics label it an abuse of power against an American firm, while journalists detail the free speech and due process claims. Skeptical voices question enforcement on contractors. Reactions highlight ethical AI boundaries and potential precedents.

Related articles

Dramatic illustration of Pentagon designating Anthropic's Claude AI a supply chain risk after military usage dispute.
AI-generated image

Pentagon designates Anthropic a ‘supply chain risk’ after dispute over military use limits for Claude AI

Reported by AI · AI-generated image · Fact-checked

The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

Following last week's federal ban on its AI tools, Anthropic has resumed negotiations with the US Defense Department to avert a supply chain risk designation. Meanwhile, OpenAI's parallel military agreement is under fire from employees, rivals, and Anthropic CEO Dario Amodei, who accused it of misleading claims in a leaked memo.

Reported by AI

Anthropic's CEO Dario Amodei stated that the company will not comply with the Pentagon's request to remove safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the AI's use in autonomous weapons and domestic surveillance. The firm, which has a $200 million contract with the Department of Defense, emphasizes its commitment to ethical AI use.

Anthropic's Claude AI app has hit the top spot on Apple's App Store free apps chart, overtaking ChatGPT and Gemini, fueled by public support following President Trump's federal ban on the tool over Anthropic's AI safety refusals.

Reported by AI

Global investors are questioning the returns on massive tech spending in artificial intelligence. Christopher Wood of Jefferies identifies Anthropic as a standout in the evolving AI landscape. The AI boom has lifted US equities, but concerns are growing over its sustainability.

Anthropic's Claude Cowork AI tool has triggered a sharp decline in the stocks of Infosys, TCS, and other SaaS companies, which have lost hundreds of billions of dollars in market value amid the rapid rise of AI.

Reported by AI

In 2025, AI agents became central to artificial intelligence progress, enabling systems to use tools and act autonomously. From theory to everyday applications, they transformed human interactions with large language models. Yet, they also brought challenges like security risks and regulatory gaps.
