Anthropic cannot meet Pentagon's AI safeguards demand, CEO says

Anthropic's CEO Dario Amodei stated that the company will not comply with the Pentagon's request to remove safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the AI's use in autonomous weapons and domestic surveillance. The firm, which has a contract with the Department of Defense worth up to $200 million, emphasizes its commitment to ethical AI use.

Anthropic, an AI startup backed by Google and Amazon, is locked in a dispute with the U.S. Department of Defense over safeguards in its AI technology, particularly its model Claude.

On Thursday, CEO Dario Amodei announced that the company cannot accede to the Pentagon's demands, which include removing restrictions that bar the AI from being used to target weapons autonomously or for mass domestic surveillance in the United States.

The Pentagon has a contract with Anthropic worth up to $200 million. However, the department insists on contracting only with AI firms that allow "any lawful use" of their technology, requiring the removal of such safeguards.

Amodei noted that uses like mass surveillance and fully autonomous weapons have never been part of their contracts and should not be included now. He revealed threats from the department to remove Anthropic from its systems, designate it a supply chain risk, and invoke the Defense Production Act to force the changes.

"Regardless, these threats do not change our position: we cannot in good conscience accede to their request," Amodei said.

In response, Pentagon spokesperson Sean Parnell posted on X that the department has no interest in using AI for mass surveillance of Americans or developing autonomous weapons without human involvement. "Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes," Parnell said.

The Pentagon did not immediately respond to requests for comment on Anthropic's statement.

Amodei expressed hope that the department would reconsider, given the value of Anthropic's technology to the armed forces, and offered to facilitate a smooth transition if needed.

An Anthropic spokesperson added that the company is ready to continue discussions and is committed to operational continuity for the Department and America's warfighters.

Related articles

Tense meeting between US Defense Secretary and Anthropic CEO over AI safety policy relaxation and military access.
Image generated by AI

Pentagon pressures Anthropic to weaken AI safety commitments

Reported by AI. Image generated by AI

U.S. Defense Secretary Pete Hegseth has threatened Anthropic with severe penalties unless the company gives the military unrestricted access to its Claude AI model. The ultimatum came during a meeting with CEO Dario Amodei in Washington on Tuesday, just as Anthropic announced it was softening its Responsible Scaling Policy. The changes shift from strict safety thresholds to more flexible risk assessments amid competitive pressure.

The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has raised concerns about hard limits on fully autonomous weapons and mass domestic surveillance. This stems from the Pentagon's desire to apply AI models in warfighting scenarios, which Anthropic has declined.

Reported by AI

Hundreds of employees at Google and OpenAI have signed an open letter in solidarity with Anthropic, urging their companies to resist the Pentagon's demands for unrestricted military use of AI models. The letter opposes uses such as domestic mass surveillance and autonomous killing without human oversight. This comes amid threats from U.S. Defense Secretary Pete Hegseth to designate Anthropic a supply chain risk.

Anthropic's latest update to its CoWork platform has triggered a significant market reaction in the software industry. The U.S. software sector came under a broad sell-off, losing over $1 trillion in value, according to Fortune. The development highlights investor uncertainty about AI-native workflows and their impact on SaaS stocks.

Reported by AI

A CNET column argues that portraying AI as possessing human attributes such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real problems like bias and safety. The piece calls for more precise terminology to foster accurate understanding.

In 2025, AI agents became central to artificial intelligence progress, enabling systems to use tools and act autonomously. From theory to everyday applications, they transformed human interactions with large language models. Yet, they also brought challenges like security risks and regulatory gaps.

Reported by AI

Anthropic has launched a legal plugin for its Claude Cowork tool, prompting concerns among dedicated legal AI providers. The plugin offers useful features for contract review and compliance but falls short of replacing specialized platforms. South African firms face additional hurdles due to data protection regulations.
