
Pentagon disputes Anthropic limits on Claude’s military use as contract talks strain


After Anthropic CEO Dario Amodei said in late February that the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons, senior Pentagon officials responded that they have no intention of using AI for domestic surveillance and insisted that private firms cannot set binding limits on how the U.S. military employs AI tools.

Last July, the Pentagon’s chief digital and artificial intelligence officer, Doug Matty, announced contract awards of up to $200 million each to four tech companies—Anthropic, Google, OpenAI, and xAI—to provide advanced AI models for Defense Department missions. Matty said the department intended to speed adoption of commercial AI for “Joint mission essential tasks” in the “warfighting domain,” but the Pentagon released few operational details, citing national security.

The relatively opaque awards drew fresh attention at the end of February, when Anthropic said it was insisting on limits for Claude in a “narrow set of cases.” In a Feb. 26 statement, Amodei said he strongly supported using AI to help defend the United States and other democracies, but argued that some applications could undermine democratic values—including “mass domestic surveillance” and “fully autonomous weapons,” which he described as self-guided combat drones.

Senior Defense Department officials responded by pushing back on both the premise and the company’s leverage. According to reporting cited by The Nation, Pentagon officials said they do not intend to use AI for domestic surveillance and that unmanned weapons systems will remain under human oversight. But they also argued that contractors should not be able to impose their own civil-liberties conditions on Pentagon operations. Emil Michael, the undersecretary of defense for research and engineering, was quoted as saying: “We won’t have any BigTech company decide Americans’ civil liberties.”

The Nation reported that, during negotiations, Michael also raised a separate question about whether Anthropic would oppose the use of Claude in nuclear-related missions such as missile defense, and that Amodei did not object to that use.

The dispute has highlighted a broader tension between the Pentagon’s push to integrate generative AI into intelligence, targeting and weapons development—and the guardrails AI companies say they need to prevent misuse. The Nation pointed to longstanding Defense Department efforts such as Project Maven, which began by using AI to help analyze drone video for potential targets, and DARPA’s Collaborative Operations in Denied Environment (CODE) initiative, which has worked on autonomy for groups of drones operating under preset rules.

Official Pentagon policy on autonomy is outlined in DoD Directive 3000.09, which states that autonomous and semi-autonomous weapons should be designed so commanders and operators can exercise “appropriate levels of human judgment over the use of force.” Critics have argued that the policy’s flexibility still leaves room for autonomy that could significantly reduce real-time human control.

As AI becomes more integrated into military planning and operations, the Anthropic-Pentagon standoff underscores an unresolved question at the center of the U.S. military’s AI expansion: how to reconcile rapid adoption of commercial systems with demands for enforceable limits on domestic surveillance and the delegation of lethal force to machines.

What people are saying

Discussions on X reveal a divide over the Pentagon-Anthropic dispute about limits on Claude AI. Supporters of Anthropic commend its ethical stance against mass surveillance and autonomous weapons, viewing the Pentagon's blacklisting as overreach. Critics argue that private firms cannot impose restrictions on military use and that straightforward contract terms should suffice. Neutral posts detail the standoff, deadlines, and legal escalations, contrasting Anthropic's position with OpenAI's compliance. High-engagement posts from journalists and analysts highlight the tension between national security and AI safety.

Related articles


Pentagon designates Anthropic a ‘supply chain risk’ after dispute over military use limits for Claude AI


The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

The Pentagon is considering ending its relationship with AI firm Anthropic over disagreements about safeguards. Anthropic, the maker of the Claude AI model, has insisted on hard limits against fully autonomous weapons and mass domestic surveillance. The dispute stems from the Pentagon's desire to apply AI models in warfighting scenarios, a use Anthropic has declined to support.


Anthropic CEO Dario Amodei said the company will not comply with the Pentagon's demand to remove safety guardrails from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the AI's use in autonomous weapons and domestic surveillance. The company, which holds a $200 million contract with the Defense Department, emphasizes its commitment to ethical AI use.

Artificial intelligence (AI) has emerged at the center of modern warfare, playing an operational support role in the recent U.S.-Israeli strike on Iran. Anthropic's Claude and Palantir's Gotham were used for intelligence assessments and target identification. Experts predict further expansion of AI in military applications.


US President Donald Trump stated on Friday that he is directing government agencies to stop working with Anthropic. The Pentagon plans to declare the startup a supply-chain risk, marking a major blow following a showdown over technology guardrails. Agencies using the company's products will have a six-month phase-out period.

Elon Musk's SpaceX and xAI are set to compete in a secret Pentagon contest to develop voice-controlled autonomous drone swarming technology, according to a report. The $100 million prize challenge, launched in January, will run for six months. The companies and the Pentagon's Defense Innovation Unit did not respond to requests for comment.


Anthropic has launched a legal plugin for its Claude Cowork tool, raising concerns among specialized legal-AI vendors. The plugin offers useful features for contract review and compliance, but falls short of replacing specialized platforms. South African companies face additional hurdles from data protection regulations.
