Trump orders federal agencies to stop using Anthropic's AI

US President Donald Trump has directed federal agencies to immediately cease using Anthropic's Claude AI, following the company's refusal to allow its use for mass surveillance or autonomous weapons. The order includes a six-month phaseout period. This decision stems from ongoing clashes between Anthropic and the Department of Defense over AI restrictions.

On Friday, February 27, 2026, President Donald Trump announced via his Truth Social platform that he is instructing every federal agency to "IMMEDIATELY CEASE" use of Anthropic's AI tools. He described the company as a "RADICAL LEFT, WOKE COMPANY" and specified a six-month phaseout for agencies such as the Department of Defense.

The order follows weeks of tension between Anthropic and government officials over military applications of artificial intelligence. Anthropic's Claude AI is widely used across the Pentagon, including in classified systems. The Trump administration has pushed for its use for "any lawful purpose," but Anthropic's contract prohibits deployment for mass domestic surveillance of Americans or for fully autonomous offensive weapons systems that operate without human input.

Earlier this week, Defense Secretary Pete Hegseth informed Anthropic CEO Dario Amodei that he would invoke rarely used powers to either compel the removal of these restrictions or designate the company as a supply chain risk, potentially barring its use by the government and defense contractors. Hegseth set a Friday deadline for compliance.

In response, Amodei stated that the company, founded with a focus on AI safety, "cannot in good conscience accede to [the Pentagon's] request." He expressed concerns that powerful AI could enable mass surveillance by assembling scattered data into comprehensive profiles of individuals' lives at scale.

Michael Pastor, dean for technology law programs at New York Law School, noted that Anthropic is justified in seeking clarity on what "lawful purposes" would cover, adding that the government's unwillingness to be specific about surveillance raises valid concerns.

Anthropic's stance aligns with similar policies at other firms; OpenAI CEO Sam Altman reportedly affirmed in an internal memo that his company maintains the same red lines against mass surveillance and autonomous weapons. Employees at Google and OpenAI have circulated a petition urging their companies to support Anthropic's position, warning against what they characterized as a Pentagon strategy of dividing the industry through fear.

Claude remains the most widely used AI system by the US military, with potential alternatives including tools from OpenAI, Google, or xAI.

Related Articles

Trump orders federal ban on Anthropic AI for government use

US President Donald Trump has directed all federal agencies to immediately cease using Anthropic's AI tools amid a dispute over military applications. The move follows weeks of clashes between Anthropic and Pentagon officials regarding restrictions on AI for mass surveillance and autonomous weapons. A six-month phase-out period has been announced.

US President Donald Trump has directed federal agencies to immediately cease using Anthropic's AI technology. The order follows a dispute with the Pentagon, where the company refused unconditional military use of its Claude models. Anthropic has vowed to challenge the Pentagon's ban in court.

US Defense Secretary Pete Hegseth has threatened Anthropic with severe penalties unless the company grants the military unrestricted access to its Claude AI model. The ultimatum came during a meeting with CEO Dario Amodei in Washington on Tuesday, coinciding with Anthropic's announcement that it would relax its Responsible Scaling Policy. The changes shift from strict safety tripwires to more flexible risk assessments amid competitive pressures.

Anthropic has launched a legal plugin for its Claude Cowork tool, prompting concerns among dedicated legal AI providers. The plugin offers useful features for contract review and compliance but falls short of replacing specialized platforms. South African firms face additional hurdles due to data protection regulations.

On February 5, 2026, Anthropic and OpenAI simultaneously launched products shifting users from chatting with AI to managing teams of AI agents. Anthropic introduced Claude Opus 4.6 with agent teams for developers, while OpenAI unveiled Frontier and GPT-5.3-Codex for enterprise workflows. These releases coincide with a $285 billion drop in software stocks amid fears of AI disrupting traditional SaaS vendors.

OpenAI is shifting resources toward improving its flagship chatbot ChatGPT, leading to the departure of several senior researchers. The San Francisco company faces intense competition from Google and Anthropic, prompting a strategic pivot away from long-term research. This change has raised concerns about the future of innovative AI exploration at the firm.

A CNET commentary argues that describing AI as having human-like qualities such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.
