Illustrative photo of Pentagon challenging Anthropic's limits on Claude AI for military use during strained contract talks.

Pentagon disputes Anthropic limits on Claude’s military use as contract talks strain


After Anthropic CEO Dario Amodei said in late February that the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons, senior Pentagon officials responded that they have no intention of using AI for domestic surveillance and insisted that private firms cannot set binding limits on how the U.S. military employs AI tools.

Last July, the Pentagon’s chief digital and artificial intelligence officer, Doug Matty, announced contract awards of up to $200 million each to four tech companies—Anthropic, Google, OpenAI, and xAI—to provide advanced AI models for Defense Department missions. Matty said the department intended to speed adoption of commercial AI for “Joint mission essential tasks” in the “warfighting domain,” but the Pentagon released few operational details, citing national security.

The relatively opaque awards drew fresh attention at the end of February, when Anthropic said it was insisting on limits for Claude in a “narrow set of cases.” In a Feb. 26 statement, Amodei said he strongly supported using AI to help defend the United States and other democracies, but argued that some applications could undermine democratic values—including “mass domestic surveillance” and “fully autonomous weapons,” which he described as self-guided combat drones.

Senior Defense Department officials responded by pushing back on both the premise and the company’s leverage. According to reporting cited by The Nation, Pentagon officials said they do not intend to use AI for domestic surveillance and that unmanned weapons systems will remain under human oversight. But they also argued that contractors should not be able to impose their own civil-liberties conditions on Pentagon operations. Emil Michael, the undersecretary of defense for research and engineering, was quoted as saying: “We won’t have any BigTech company decide Americans’ civil liberties.”

The Nation reported that, during negotiations, Michael also raised a separate question about whether Anthropic would oppose the use of Claude in nuclear-related missions such as missile defense, and that Amodei did not object to that use.

The dispute has highlighted a broader tension between the Pentagon’s push to integrate generative AI into intelligence, targeting and weapons development—and the guardrails AI companies say they need to prevent misuse. The Nation pointed to longstanding Defense Department efforts such as Project Maven, which began by using AI to help analyze drone video for potential targets, and DARPA’s Collaborative Operations in Denied Environment (CODE) initiative, which has worked on autonomy for groups of drones operating under preset rules.

Official Pentagon policy on autonomy is outlined in DoD Directive 3000.09, which states that autonomous and semi-autonomous weapons should be designed so commanders and operators can exercise “appropriate levels of human judgment over the use of force.” Critics have argued that the policy’s flexibility still leaves room for autonomy that could significantly reduce real-time human control.

As AI becomes more integrated into military planning and operations, the Anthropic-Pentagon standoff underscores an unresolved question at the center of the U.S. military’s AI expansion: how to reconcile rapid adoption of commercial systems with demands for enforceable limits on domestic surveillance and the delegation of lethal force to machines.

What people are saying

Discussions on X reveal a divide over the Pentagon-Anthropic dispute on Claude AI limits. Supporters of Anthropic commend its ethical stance against mass surveillance and autonomous weapons, viewing the Pentagon's blacklisting as overreach. Critics argue that private firms cannot impose restrictions on military use and that standard contract terms should suffice. Neutral posts detail the standoff, deadlines, and legal escalations, contrasting Anthropic's position with OpenAI's compliance. High-engagement accounts from journalists and analysts highlight the tension between national security and AI safety.

Related articles

Dramatic illustration of Pentagon designating Anthropic's Claude AI a supply chain risk after military usage dispute.

Pentagon designates Anthropic a ‘supply chain risk’ after dispute over military use limits for Claude AI


The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

The Pentagon is considering ending its relationship with AI firm Anthropic over disagreements about safeguards. Anthropic, the maker of the Claude models, has sought hard limits against use of its technology for fully autonomous weapons and mass domestic surveillance. The dispute stems from the Pentagon's desire to apply AI models in warfighting scenarios, a use Anthropic has declined to support without those limits.


Anthropic's CEO Dario Amodei stated that the company will not comply with the Pentagon's request to remove safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the AI's use in autonomous weapons and domestic surveillance. The firm, which has a $200 million contract with the Department of Defense, emphasizes its commitment to ethical AI use.

In the recent U.S.-Israeli strikes on Iran, artificial intelligence played an operational support role, moving to the center of modern warfare. Anthropic's Claude and Palantir's Gotham were used for intelligence analysis and target identification. Experts expect military applications of AI to continue expanding.


U.S. President Donald Trump said Friday he would direct government agencies to stop working with Anthropic. The Pentagon plans to designate the startup a supply chain risk, a major blow following the dispute over technical guardrails. Agencies using the company's products will have a six-month phase-out period.

According to reports, Elon Musk's SpaceX and xAI will compete in a secretive Pentagon contest to develop voice-controlled autonomous drone swarm technology. The $100 million prize challenge, launched in January, runs for six months. The companies and the Pentagon's Defense Innovation Unit did not respond to requests for comment.


Anthropic has launched a legal plugin for its Claude Cowork tool, prompting concerns among dedicated legal AI providers. The plugin offers useful features for contract review and compliance but falls short of replacing specialized platforms. South African firms face additional hurdles due to data protection regulations.

 

 

 
