Google and OpenAI employees sign letter supporting Anthropic against Pentagon

Hundreds of employees from Google and OpenAI have signed an open letter in solidarity with Anthropic, urging their companies to resist Pentagon demands for unrestricted military use of AI models. The letter opposes uses involving domestic mass surveillance and autonomous killing without human oversight. This comes amid threats from US Defense Secretary Pete Hegseth to label Anthropic a supply chain risk.

The open letter, titled “We Will Not Be Divided,” calls on the leadership of Google and OpenAI to stand together against the Pentagon's requests. It specifically refuses demands to use AI models like Anthropic's Claude for domestic mass surveillance or for killing people autonomously without human oversight. Anthropic CEO Dario Amodei has stated that these are lines no AI company should cross.

As of February 27, 2026, the letter has garnered over 450 signatures, with nearly 400 from Google employees and the remainder from OpenAI. About 50 percent of signatories chose to attach their names publicly, while the others remained anonymous. All signatures are verified as coming from current employees of the two companies. The organizers, who are unaffiliated with any AI company, political party, or advocacy group, initiated the effort independently.

This development is part of an ongoing standoff between Anthropic and US Defense Secretary Pete Hegseth. Hegseth has threatened to designate Anthropic a “supply chain risk” unless it withdraws certain guardrails for classified work. The Pentagon has been negotiating with Google and OpenAI on similar uses of their models for classified purposes, and xAI joined those talks earlier in the week. The letter contends that the government is attempting to divide the companies by instilling fear that others might comply.

OpenAI CEO Sam Altman addressed the issue in an internal memo, stating that his company will maintain the same red lines as Anthropic. In a CNBC interview on the same day, Altman expressed that he does not believe the Pentagon should threaten Defense Production Act measures against these companies. Separately, Amodei has affirmed Anthropic's position, saying, “We cannot in good conscience accede to their request.”

Related articles

Tense meeting between US Defense Secretary and Anthropic CEO over AI safety policy relaxation and military access.
Image generated by AI

Pentagon pressures Anthropic to weaken AI safety commitments

Reported by AI

US Defense Secretary Pete Hegseth has threatened Anthropic with severe penalties unless the company grants the military unrestricted access to its Claude AI model. The ultimatum came during a meeting with CEO Dario Amodei in Washington on Tuesday, coinciding with Anthropic's announcement to relax its Responsible Scaling Policy. The changes shift from strict safety tripwires to more flexible risk assessments amid competitive pressures.

Anthropic CEO Dario Amodei stated that the company will not comply with the Pentagon's demand to remove safety safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the AI's use in autonomous weapons and domestic surveillance. The company, which holds a $200 million contract with the Department of Defense, emphasizes its commitment to ethical AI use.


US President Donald Trump has directed federal agencies to immediately cease using Anthropic's AI technology. The order follows a dispute with the Pentagon, where the company refused unconditional military use of its Claude models. Anthropic has vowed to challenge the Pentagon's ban in court.

A CNET commentary argues that describing AI as having human-like qualities such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.


Elon Musk's SpaceX and xAI are set to compete in a secret Pentagon contest to develop voice-controlled autonomous drone swarming technology, according to a report. The $100 million prize challenge, launched in January, will run for six months. The companies and the Pentagon's defense innovation unit did not respond to comment requests.

Anthropic has upgraded its Claude AI chatbot's free plan by adding previously paid features, positioning it as an ad-free alternative to OpenAI's ChatGPT. The enhancements include file creation, connectors to third-party services, and custom skills, amid OpenAI's plans to introduce ads in its free tier. This move follows Anthropic's Super Bowl advertisements criticizing the ad strategy.


Anthropic's recent update to its CoWork platform has led to significant market reactions in the software industry. The U.S. software sector saw a widespread sell-off, losing over $1 trillion in value, according to Fortune. This development highlights investor uncertainty around AI-native workflows and their impact on SaaS stocks.

Sunday, March 1, 2026, 08:19

Claude AI app tops App Store amid backlash to US government ban

Saturday, February 28, 2026, 15:28

Trump orders federal ban on Anthropic AI for government use

Friday, February 27, 2026, 11:37

Trump directs US agencies to end work with Anthropic

Friday, February 27, 2026, 02:33

Trump orders federal agencies to stop using Anthropic's AI

Thursday, February 26, 2026, 21:33

Anthropic retires Claude 3 Opus and grants it a Substack newsletter

Monday, February 16, 2026, 13:12

Pentagon may sever ties with Anthropic over AI safeguards

Thursday, February 5, 2026, 02:31

Anthropic and OpenAI release AI agent management tools

Monday, December 29, 2025, 20:12

AI agents arrived in 2025

Friday, December 12, 2025, 05:25

Pentagon launches Gemini-based AI platform

Wednesday, December 10, 2025, 22:39

Linux Foundation launches Agentic AI Foundation
