
Pentagon disputes Anthropic limits on Claude’s military use as contract talks strain


After Anthropic CEO Dario Amodei said in late February that the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons, senior Pentagon officials said they have no intention of using AI for domestic surveillance and insist that private firms cannot set binding limits on how the U.S. military employs AI tools.

Last July, the Pentagon’s chief digital and artificial intelligence officer, Doug Matty, announced contract awards of up to $200 million each to four tech companies—Anthropic, Google, OpenAI, and xAI—to provide advanced AI models for Defense Department missions. Matty said the department intended to speed adoption of commercial AI for “Joint mission essential tasks” in the “warfighting domain,” but the Pentagon released few operational details, citing national security.

The relatively opaque awards drew fresh attention at the end of February, when Anthropic said it was insisting on limits for Claude in a “narrow set of cases.” In a Feb. 26 statement, Amodei said he strongly supported using AI to help defend the United States and other democracies, but argued that some applications could undermine democratic values—including “mass domestic surveillance” and “fully autonomous weapons,” which he described as self-guided combat drones.

Senior Defense Department officials responded by pushing back on both the premise and the company’s leverage. According to reporting cited by The Nation, Pentagon officials said they do not intend to use AI for domestic surveillance and that unmanned weapons systems will remain under human oversight. But they also argued that contractors should not be able to impose their own civil-liberties conditions on Pentagon operations. Emil Michael, the undersecretary of defense for research and engineering, was quoted as saying: “We won’t have any BigTech company decide Americans’ civil liberties.”

The Nation reported that, during negotiations, Michael also raised a separate question about whether Anthropic would oppose the use of Claude in nuclear-related missions such as missile defense, and that Amodei did not object to that use.

The dispute has highlighted a broader tension between the Pentagon’s push to integrate generative AI into intelligence, targeting and weapons development—and the guardrails AI companies say they need to prevent misuse. The Nation pointed to longstanding Defense Department efforts such as Project Maven, which began by using AI to help analyze drone video for potential targets, and DARPA’s Collaborative Operations in Denied Environment (CODE) initiative, which has worked on autonomy for groups of drones operating under preset rules.

Official Pentagon policy on autonomy is outlined in DoD Directive 3000.09, which states that autonomous and semi-autonomous weapons should be designed so commanders and operators can exercise “appropriate levels of human judgment over the use of force.” Critics have argued that the policy’s flexibility still leaves room for autonomy that could significantly reduce real-time human control.

As AI becomes more integrated into military planning and operations, the Anthropic-Pentagon standoff underscores an unresolved question at the center of the U.S. military’s AI expansion: how to reconcile rapid adoption of commercial systems with demands for enforceable limits on domestic surveillance and the delegation of lethal force to machines.

What people are saying

Discussions on X reveal a divide over the Pentagon-Anthropic dispute on Claude AI limits. Supporters of Anthropic commend its ethical stance against mass surveillance and autonomous weapons, viewing the Pentagon's blacklisting as overreach. Critics argue that private firms cannot impose restrictions on military use and that straightforward contract terms should suffice. Neutral posts detail the standoff, deadlines, and legal escalations, contrasting Anthropic's position with OpenAI's compliance. High-engagement accounts from journalists and analysts highlight the tension between national security and AI safety.

Related articles


Pentagon designates Anthropic a ‘supply chain risk’ after dispute over military use limits for Claude AI


The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

The Pentagon is considering ending its relationship with AI firm Anthropic over disagreements about safeguards. Anthropic, maker of the Claude AI model, has insisted on hard limits barring fully autonomous weapons and mass domestic surveillance. The dispute stems from the Pentagon's desire to apply AI models in warfighting scenarios without such restrictions, terms Anthropic has declined to accept.


Anthropic's CEO Dario Amodei stated that the company will not comply with the Pentagon's request to remove safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the AI's use in autonomous weapons and domestic surveillance. The firm, which has a $200 million contract with the Department of Defense, emphasizes its commitment to ethical AI use.

US President Donald Trump stated on Friday that he is directing government agencies to stop working with Anthropic. The Pentagon plans to declare the startup a supply-chain risk, marking a major blow following a showdown over technology guardrails. Agencies using the company's products will have a six-month phase-out period.


Anthropic's Claude AI app has hit the top spot on Apple's App Store free apps chart, overtaking ChatGPT and Gemini, fueled by public support following President Trump's federal ban on the tool over Anthropic's AI safety refusals.

Anthropic has launched a legal plugin for its Claude Cowork tool, prompting concern among dedicated legal AI providers. The plugin offers useful features for contract review and compliance but falls short of replacing specialized platforms, and South African firms face additional hurdles due to data protection regulations.


A CNET commentary argues that describing AI as having human-like qualities such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.
