Anthropic cannot meet Pentagon's AI safeguards demand, CEO says

Anthropic's CEO Dario Amodei stated that the company will not comply with the Pentagon's request to remove safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the AI's use in autonomous weapons and domestic surveillance. The firm, which has a $200 million contract with the Department of Defense, emphasizes its commitment to ethical AI use.

Anthropic, an AI startup backed by Google and Amazon, is locked in a dispute with the U.S. Department of Defense over safeguards in its AI technology, particularly its model Claude.

On Thursday, CEO Dario Amodei announced that the company cannot accede to the Pentagon's demands, which include removing restrictions that bar the AI from being used to target weapons autonomously or for mass domestic surveillance in the United States.

The Pentagon has a contract with Anthropic worth up to $200 million. However, the department insists on contracting only with AI firms that allow "any lawful use" of their technology, requiring the removal of such safeguards.

Amodei noted that uses like mass surveillance and fully autonomous weapons have never been part of their contracts and should not be included now. He revealed threats from the department to remove Anthropic from its systems, designate it a supply chain risk, and invoke the Defense Production Act to force the changes.

"Regardless, these threats do not change our position: we cannot in good conscience accede to their request," Amodei said.

In response, Pentagon spokesperson Sean Parnell posted on X that the department has no interest in using AI for mass surveillance of Americans or developing autonomous weapons without human involvement. "Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes," Parnell said.

The Pentagon did not immediately respond to requests for comment on Anthropic's statement.

Amodei expressed hope that the department would reconsider, given the value of Anthropic's technology to the armed forces, and offered to facilitate a smooth transition if needed.

An Anthropic spokesperson added that the company is ready to continue discussions and is committed to operational continuity for the Department and America's warfighters.

Related Articles

Tense meeting between US Defense Secretary and Anthropic CEO over AI safety policy relaxation and military access.
Image generated by AI

Pentagon pressures Anthropic to weaken its AI safety commitments

Reported by AI

U.S. Defense Secretary Pete Hegseth has threatened Anthropic with severe sanctions unless the company grants the military unrestricted access to its Claude AI model. The ultimatum was delivered during a meeting with CEO Dario Amodei in Washington on Tuesday, coinciding with Anthropic's announcement that it would loosen its Responsible Scaling Policy. The change shifts from strict safety triggers to more flexible risk assessments amid competitive pressure.

The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has raised concerns about hard limits on fully autonomous weapons and mass domestic surveillance. This stems from the Pentagon's desire to apply AI models in warfighting scenarios, which Anthropic has declined.


Hundreds of employees from Google and OpenAI have signed an open letter in solidarity with Anthropic, urging their companies to refuse the Pentagon's demands for unrestricted military use of AI models. The letter opposes uses involving mass domestic surveillance and autonomous killing without human oversight. It comes amid threats from U.S. Defense Secretary Pete Hegseth to designate Anthropic a supply chain risk.

Anthropic's latest update to its CoWork platform has triggered a significant market reaction in the software industry. The U.S. software sector saw a broad sell-off, shedding more than $1 trillion in value, according to Fortune. The development highlights investor uncertainty around AI-native workflows and their impact on SaaS stocks.


A CNET commentary argues that describing AI as having human-like qualities, such as a soul or consciousness, misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and security. The article calls for more precise terminology to foster accurate understanding.

In 2025, AI agents became central to artificial intelligence progress, enabling systems to use tools and act autonomously. From theory to everyday applications, they transformed human interactions with large language models. Yet, they also brought challenges like security risks and regulatory gaps.


Anthropic has launched a legal plugin for its Claude Cowork tool, prompting concerns among dedicated legal AI providers. The plugin offers useful features for contract review and compliance but falls short of replacing specialized platforms. South African firms face additional hurdles due to data protection regulations.
