Anthropic cannot meet Pentagon's AI safeguards demand, CEO says

Anthropic CEO Dario Amodei stated that the company will not comply with the Pentagon's request to remove safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on safeguards preventing the AI's use in autonomous weapons and mass domestic surveillance. The firm, which has a contract with the Department of Defense worth up to $200 million, emphasizes its commitment to ethical AI use.

Anthropic, an AI startup backed by Google and Amazon, is locked in a dispute with the U.S. Department of Defense over safeguards in its AI technology, particularly its model Claude.

On Thursday, CEO Dario Amodei announced that the company cannot accede to the Pentagon's demands, which include removing restrictions that bar the AI from being used to target weapons autonomously or for mass domestic surveillance in the United States.

The Pentagon has a contract with Anthropic worth up to $200 million. However, the department insists on contracting only with AI firms that allow "any lawful use" of their technology, requiring the removal of such safeguards.

Amodei noted that uses like mass surveillance and fully autonomous weapons have never been part of their contracts and should not be included now. He revealed threats from the department to remove Anthropic from its systems, designate it a supply chain risk, and invoke the Defense Production Act to force the changes.

"Regardless, these threats do not change our position: we cannot in good conscience accede to their request," Amodei said.

In response, Pentagon spokesperson Sean Parnell posted on X that the department has no interest in using AI for mass surveillance of Americans or developing autonomous weapons without human involvement. "Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes," Parnell said.

The Pentagon did not immediately respond to requests for comment on Anthropic's statement.

Amodei expressed hope that the department would reconsider, given the value of Anthropic's technology to the armed forces, and offered to facilitate a smooth transition if needed.

An Anthropic spokesperson added that the company is ready to continue discussions and is committed to operational continuity for the Department and America's warfighters.

Related articles

Tense meeting between US Defense Secretary and Anthropic CEO over AI safety policy relaxation and military access.

Pentagon pressures Anthropic to weaken AI safety commitments


US Defense Secretary Pete Hegseth has threatened Anthropic with severe penalties unless the company grants the military unrestricted access to its Claude AI model. The ultimatum came during a meeting with CEO Dario Amodei in Washington on Tuesday, coinciding with Anthropic's announcement to relax its Responsible Scaling Policy. The changes shift from strict safety tripwires to more flexible risk assessments amid competitive pressures.

The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has raised concerns about hard limits on fully autonomous weapons and mass domestic surveillance. This stems from the Pentagon's desire to apply AI models in warfighting scenarios, which Anthropic has declined.


Hundreds of employees from Google and OpenAI have signed an open letter in solidarity with Anthropic, urging their companies to resist Pentagon demands for unrestricted military use of AI models. The letter opposes uses involving domestic mass surveillance and autonomous killing without human oversight. This comes amid threats from US Defense Secretary Pete Hegseth to label Anthropic a supply chain risk.

Anthropic's recent update to its CoWork platform has led to significant market reactions in the software industry. The U.S. software sector saw a widespread sell-off, losing over $1 trillion in value, according to Fortune. This development highlights investor uncertainty around AI-native workflows and their impact on SaaS stocks.


A CNET commentary argues that describing AI as having human-like qualities such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.

In 2025, AI agents became central to progress in artificial intelligence, enabling systems to use tools and act autonomously. Moving from theory to everyday applications, they transformed how people interact with large language models. They also brought challenges, however, including security risks and regulatory gaps.


Anthropic has launched a legal plugin for its Claude Cowork tool, prompting concerns among dedicated legal AI providers. The plugin offers useful features for contract review and compliance but falls short of replacing specialized platforms. South African firms face additional hurdles due to data protection regulations.
