Anthropic cannot meet Pentagon's AI safeguards demand, CEO says

Anthropic's CEO Dario Amodei stated that the company will not comply with the Pentagon's request to remove safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the AI's use in autonomous weapons and domestic surveillance. The firm, which has a contract with the Department of Defense worth up to $200 million, emphasizes its commitment to ethical AI use.

Anthropic, an AI startup backed by Google and Amazon, is locked in a dispute with the U.S. Department of Defense over safeguards in its AI technology, particularly its model Claude.

On Thursday, CEO Dario Amodei announced that the company cannot accede to the Pentagon's demands, which include removing restrictions that bar the AI from being used to target weapons autonomously or for mass domestic surveillance in the United States.

The Pentagon has a contract with Anthropic worth up to $200 million. However, the department insists on contracting only with AI firms that allow "any lawful use" of their technology, requiring the removal of such safeguards.

Amodei noted that uses like mass surveillance and fully autonomous weapons have never been part of their contracts and should not be included now. He revealed threats from the department to remove Anthropic from its systems, designate it a supply chain risk, and invoke the Defense Production Act to force the changes.

"Regardless, these threats do not change our position: we cannot in good conscience accede to their request," Amodei said.

In response, Pentagon spokesperson Sean Parnell posted on X that the department has no interest in using AI for mass surveillance of Americans or developing autonomous weapons without human involvement. "Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes," Parnell said.

The Pentagon did not immediately respond to requests for comment on Anthropic's statement.

Amodei expressed hope that the department would reconsider, given the value of Anthropic's technology to the armed forces, and offered to facilitate a smooth transition if needed.

An Anthropic spokesperson added that the company is ready to continue discussions and is committed to operational continuity for the Department and America's warfighters.

Related Articles

Tense meeting between US Defense Secretary and Anthropic CEO over AI safety policy relaxation and military access.
Image generated by AI

Pentagon pressures Anthropic to weaken AI safety commitments

Reported by AI · Image generated by AI

US Defense Secretary Pete Hegseth has threatened Anthropic with severe penalties unless the company grants the military unrestricted access to its Claude AI model. The ultimatum came during a meeting with CEO Dario Amodei in Washington on Tuesday, coinciding with Anthropic's announcement that it would relax its Responsible Scaling Policy. The changes shift from strict safety tripwires to more flexible risk assessments amid competitive pressures.

Anthropic has filed a federal lawsuit against the US Department of Defense, challenging its recent label of the AI company as a supply-chain risk. The dispute stems from a contract disagreement over the use of Anthropic's Claude AI for military purposes, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates free speech and due process rights.

Reported by AI · Facts verified

After Anthropic CEO Dario Amodei said in late February that the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons, senior Pentagon officials said they have no intention of using AI for domestic surveillance and insist that private firms cannot set binding limits on how the U.S. military employs AI tools.

US President Donald Trump stated on Friday that he is directing government agencies to stop working with Anthropic. The Pentagon plans to declare the startup a supply-chain risk, marking a major blow following a showdown over technology guardrails. Agencies using the company's products will have a six-month phase-out period.

Reported by AI

Anthropic has launched the Anthropic Institute, a new research initiative, and opened its first Public Policy office in Washington, DC, this spring. These steps follow the AI company's recent federal lawsuit against the US government over a Defense Department supply chain risk designation tied to a contract dispute.

Artificial intelligence (AI) has emerged at the center of modern warfare, playing an operational support role in the recent U.S.-Israeli strike on Iran. Anthropic's Claude and Palantir's Gotham were used for intelligence assessments and target identification. Experts predict further expansion of AI in military applications.

Reported by AI

Anthropic has announced that its AI chatbot Claude will remain free of advertisements, contrasting sharply with rival OpenAI's recent decision to test ads in ChatGPT. The company launched a Super Bowl ad campaign mocking AI assistants that interrupt conversations with product pitches. This move highlights growing tensions in the competitive AI landscape.
