Trump orders federal agencies to stop using Anthropic's AI

US President Donald Trump has directed federal agencies to immediately cease using Anthropic's Claude AI, following the company's refusal to allow its use for mass surveillance or autonomous weapons. The order includes a six-month phaseout period. This decision stems from ongoing clashes between Anthropic and the Department of Defense over AI restrictions.

On Friday, February 27, 2026, President Donald Trump announced via his Truth Social platform that he is instructing every federal agency to "IMMEDIATELY CEASE" use of Anthropic's AI tools. He described the company as a "RADICAL LEFT, WOKE COMPANY" and specified a six-month phaseout for agencies such as the Department of Defense.

The order follows weeks of tension between Anthropic and government officials over military applications of artificial intelligence. Anthropic's Claude AI is widely used across the Pentagon, including in classified systems. The Trump administration has pushed to allow its use for "any lawful purpose," but Anthropic's contract prohibits deployment for mass domestic surveillance of Americans or for fully autonomous offensive weapons systems operating without human input.

Earlier this week, Defense Secretary Pete Hegseth informed Anthropic CEO Dario Amodei that he would invoke rarely used powers to either compel the removal of these restrictions or designate the company as a supply chain risk, potentially barring its use by the government and defense contractors. Hegseth set a Friday deadline for compliance.

In response, Amodei stated that the company, founded with a focus on AI safety, "cannot in good conscience accede to [the Pentagon's] request." He expressed concerns that powerful AI could enable mass surveillance by assembling scattered data into comprehensive profiles of individuals' lives at scale.

Michael Pastor, dean for technology law programs at New York Law School, noted that Anthropic is justified in seeking clarity on what counts as a "lawful purpose," adding that the administration's unwillingness to specify its position on surveillance raises valid concerns.

Anthropic's stance aligns with similar policies at other firms; OpenAI CEO Sam Altman reportedly affirmed in an internal memo that his company maintains the same red lines against mass surveillance and autonomous weapons. Employees at Google and OpenAI have circulated a petition urging their companies to support Anthropic's position, warning against the Pentagon's strategy of division through fear.

Claude remains the most widely used AI system by the US military, with potential alternatives including tools from OpenAI, Google, or xAI.

Related articles


Trump orders federal ban on Anthropic AI for government use


US President Donald Trump has directed all federal agencies to immediately cease using Anthropic's AI tools amid a dispute over military applications. The move follows weeks of clashes between Anthropic and Pentagon officials regarding restrictions on AI for mass surveillance and autonomous weapons. A six-month phase-out period has been announced.

US President Donald Trump has directed federal agencies to immediately cease using Anthropic's AI technology. The order follows a dispute with the Pentagon, where the company refused unconditional military use of its Claude models. Anthropic has vowed to challenge the Pentagon's ban in court.


Anthropic has filed a federal lawsuit against the US Department of Defense, challenging its recent label of the AI company as a supply-chain risk. The dispute stems from a contract disagreement over the use of Anthropic's Claude AI for military purposes, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates free speech and due process rights.

Anthropic's Claude AI app has hit the top spot on Apple's App Store free apps chart, overtaking ChatGPT and Gemini, fueled by public support following President Trump's federal ban on the tool over Anthropic's AI safety refusals.


A federal judge in San Francisco issued a preliminary injunction on March 27, 2026, blocking the Trump administration's designation of AI company Anthropic as a military supply chain risk—a label applied three weeks earlier amid disputes over the firm's limits on its Claude AI models for military uses like autonomous weapons.

Anthropic has limited access to its Claude Mythos Preview AI model due to its superior ability to detect and exploit software vulnerabilities, while launching Project Glasswing—a consortium with over 45 tech firms including Apple, Google, and Microsoft—to collaboratively patch flaws and bolster defenses. The announcement follows recent data leaks at the firm.


Artificial intelligence (AI) has emerged at the center of modern warfare, playing an operational support role in the recent U.S.-Israeli strike on Iran. Anthropic's Claude and Palantir's Gotham were used for intelligence assessments and target identification. Experts predict further expansion of AI in military applications.
