Anthropic cannot meet Pentagon's AI safeguards demand, CEO says

Anthropic's CEO Dario Amodei stated that the company will not comply with the Pentagon's request to remove safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the AI's use in autonomous weapons and domestic surveillance. The firm, which has a $200 million contract with the Department of Defense, emphasizes its commitment to ethical AI use.

Anthropic, an AI startup backed by Google and Amazon, is locked in a dispute with the U.S. Department of Defense over safeguards in its AI technology, particularly its model Claude.

On Thursday, CEO Dario Amodei announced that the company cannot accede to the Pentagon's demands, which include removing restrictions that bar the AI from being used to target weapons autonomously or for mass domestic surveillance in the United States.

The Pentagon has a contract with Anthropic worth up to $200 million. However, the department insists on contracting only with AI firms that allow "any lawful use" of their technology, requiring the removal of such safeguards.

Amodei noted that uses like mass surveillance and fully autonomous weapons have never been part of their contracts and should not be included now. He revealed threats from the department to remove Anthropic from its systems, designate it a supply chain risk, and invoke the Defense Production Act to force the changes.

"Regardless, these threats do not change our position: we cannot in good conscience accede to their request," Amodei said.

In response, Pentagon spokesperson Sean Parnell posted on X that the department has no interest in using AI for mass surveillance of Americans or developing autonomous weapons without human involvement. "Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes," Parnell said.

The Pentagon did not immediately respond to requests for comment on Anthropic's statement.

Amodei expressed hope that the department would reconsider, given the value of Anthropic's technology to the armed forces, and offered to facilitate a smooth transition if needed.

An Anthropic spokesperson added that the company is ready to continue discussions and is committed to operational continuity for the Department and America's warfighters.

Related Articles

Tense meeting between US Defense Secretary and Anthropic CEO over AI safety policy relaxation and military access.
AI-generated image

Pentagon pressures Anthropic to weaken AI safety commitments

Reported by AI · AI-generated image

US Defense Secretary Pete Hegseth has threatened Anthropic with severe penalties unless the company grants the military unrestricted access to its Claude AI model. The ultimatum came during a meeting with CEO Dario Amodei in Washington on Tuesday, coinciding with Anthropic's announcement to relax its Responsible Scaling Policy. The changes shift from strict safety tripwires to more flexible risk assessments amid competitive pressures.

The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has insisted on hard limits barring fully autonomous weapons and mass domestic surveillance. The dispute stems from the Pentagon's desire to apply AI models in warfighting scenarios, a use Anthropic has declined to support.


Hundreds of employees from Google and OpenAI have signed an open letter in solidarity with Anthropic, urging their companies to resist Pentagon demands for unrestricted military use of AI models. The letter opposes uses involving domestic mass surveillance and autonomous killing without human oversight. This comes amid threats from US Defense Secretary Pete Hegseth to label Anthropic a supply chain risk.

Anthropic's recent update to its CoWork platform has led to significant market reactions in the software industry. The U.S. software sector saw a widespread sell-off, losing over $1 trillion in value, according to Fortune. This development highlights investor uncertainty around AI-native workflows and their impact on SaaS stocks.


A CNET commentary argues that describing AI as having human-like qualities such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.

In 2025, AI agents became central to artificial intelligence progress, enabling systems to use tools and act autonomously. From theory to everyday applications, they transformed human interactions with large language models. Yet, they also brought challenges like security risks and regulatory gaps.


Anthropic has launched a legal plugin for its Claude Cowork tool, prompting concerns among dedicated legal AI providers. The plugin offers useful features for contract review and compliance but falls short of replacing specialized platforms. South African firms face additional hurdles due to data protection regulations.
