Claude AI app hits No. 1 on the App Store amid backlash against US government ban

Anthropic's Claude AI app has surged to the top of Apple's App Store free-app rankings, overtaking ChatGPT and Gemini, buoyed by public support after President Trump banned the tool across the federal government over Anthropic's refusal to relax its AI safety standards.

In a dramatic consumer-market response amid rising tensions with the US government, Anthropic's Claude AI app climbed to No. 1 on the App Store's free-app chart as of March 1, 2026, pushing OpenAI's ChatGPT down to second place and Google Gemini to third.

The surge follows President Trump's February 27 order banning federal agencies from using Claude after Anthropic refused to remove its safety guardrails against mass surveillance and autonomous weapons, as covered in previous reporting. Defense Secretary Pete Hegseth had threatened to designate Anthropic a "supply chain risk" after the company held to its safety-first stance.

Public backlash has driven downloads and raised Claude's profile despite the federal restrictions, while OpenAI has filled the gap with a Department of Defense contract. In an AMA on X, OpenAI CEO Sam Altman called the risk designation against Anthropic a "very bad decision" and a "terrifying precedent," saying he hoped for a reversal and a better outcome.

The episode highlights the rift between AI companies' safety postures and government demands, with Claude thriving commercially even as its federal access ends.

Related articles


Trump orders federal ban on Anthropic AI for government use

AI-generated report · AI-generated image

US President Donald Trump has directed all federal agencies to immediately cease using Anthropic's AI tools amid a dispute over military applications. The move follows weeks of clashes between Anthropic and Pentagon officials regarding restrictions on AI for mass surveillance and autonomous weapons. A six-month phase-out period has been announced.

AI-generated report

US President Donald Trump has directed federal agencies to immediately cease using Anthropic's AI technology. The order follows a dispute with the Pentagon, where the company refused unconditional military use of its Claude models. Anthropic has vowed to challenge the Pentagon's ban in court.

After Anthropic CEO Dario Amodei said in late February that the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons, senior Pentagon officials said they have no intention of using AI for domestic surveillance and insist that private firms cannot set binding limits on how the U.S. military employs AI tools.

AI-generated report · Fact-checked

The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

In the wake of Anthropic's unveiling of its powerful Claude Mythos AI—capable of detecting and exploiting software vulnerabilities—the US Treasury Secretary has convened top bank executives to highlight escalating AI-driven cyber threats. The move underscores growing concerns as the AI is restricted to a tech coalition via Project Glasswing.

AI-generated report

Anthropic has released a beta add-on bringing its Claude AI assistant to Microsoft Word, available now to customers on Team and Enterprise plans. The integration allows users to generate new content, edit documents, and handle comments within the app. It offers an alternative to Microsoft's Copilot.
