Tense meeting between US Defense Secretary and Anthropic CEO over AI safety policy relaxation and military access. (AI-generated image)

Pentagon pressures Anthropic to weaken AI safety commitments

US Defense Secretary Pete Hegseth has threatened Anthropic with severe penalties unless the company grants the military unrestricted access to its Claude AI model. The ultimatum came during a meeting with CEO Dario Amodei in Washington on Tuesday, coinciding with Anthropic's announcement that it was relaxing its Responsible Scaling Policy. The changes replace strict safety tripwires with more flexible risk assessments, a shift the company attributes to competitive pressure.

On February 25, 2026, US Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to Washington for discussions on the company's AI usage policies. Hegseth demanded that Anthropic allow its Claude model to be used in all lawful military applications, including potentially sensitive areas like mass surveillance and lethal missions without direct human oversight. Anthropic has expressed concerns about the reliability of current AI models for such uses, offering instead to apply its standard usage policies to government contracts while prohibiting applications like autonomous weapons or domestic surveillance without human involvement.

Hegseth set a deadline of Friday, February 27, warning that failure to comply could lead to invocation of the Defense Production Act, designation of Anthropic as a supply chain risk, and exclusion from Department of Defense contracts. The company holds a $200 million contract with the Pentagon, and Claude has been utilized in classified operations, such as the January 2026 capture of Venezuelan leader Nicolás Maduro in collaboration with Palantir.

The same day, Anthropic announced modifications to its Responsible Scaling Policy, moving away from hard commitments to halt model training unless safety could be guaranteed in advance. The updated policy adopts a relative approach, emphasizing risk reports and frontier safety roadmaps to provide transparency. Anthropic cited a 'collective action problem' in the competitive AI landscape, noting that unilateral pauses would disadvantage responsible developers while others advance without mitigations.

Chief science officer Jared Kaplan stated, 'We felt that it wouldn't actually help anyone for us to stop training AI models,' highlighting the rapid pace of industry progress. Chris Painter of METR described the shift as understandable but warned of a potential 'frog-boiling' effect, where flexible safety measures could erode over time. Anthropic maintains it is engaging in good-faith talks to support national security responsibly. The Pentagon is also negotiating with rivals like OpenAI, Google, and xAI to integrate their technologies into military systems.

What people are saying

X users predominantly express alarm and criticism toward Defense Secretary Pete Hegseth's ultimatum to Anthropic, supporting the company's safeguards against AI for autonomous weapons and mass surveillance. High-engagement posts detail threats of Defense Production Act invocation and note the timing with Anthropic's Responsible Scaling Policy relaxation. Sentiments include outrage over government pressure, skepticism about safety commitments, and neutral reporting from journalists.

Related articles

Illustrative photo of Pentagon challenging Anthropic's limits on Claude AI for military use during strained contract talks. (AI-generated image)

Pentagon disputes Anthropic limits on Claude’s military use as contract talks strain

Reported by AI · AI-generated image · Fact check

After Anthropic CEO Dario Amodei said in late February that the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons, senior Pentagon officials said they have no intention of using AI for domestic surveillance and insist that private firms cannot set binding limits on how the U.S. military employs AI tools.

Anthropic's CEO Dario Amodei stated that the company will not comply with the Pentagon's request to remove safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the AI's use in autonomous weapons and domestic surveillance. The firm, which has a $200 million contract with the Department of Defense, emphasizes its commitment to ethical AI use.

The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has insisted on hard limits against fully autonomous weapons and mass domestic surveillance. The dispute stems from the Pentagon's push to apply AI models in warfighting scenarios, a use Anthropic has declined to support.

US President Donald Trump said on Friday that he is directing government agencies to stop working with Anthropic. The Pentagon plans to designate the startup a supply chain risk, a major blow following the dispute over technical guardrails. Agencies using the company's products will have a six-month phase-out period.

Anthropic has restricted access to its Claude Mythos Preview AI model, citing its superior ability to detect and exploit software vulnerabilities. At the same time, the company launched Project Glasswing, a consortium of more than 45 tech firms including Apple, Google, and Microsoft, to collaboratively patch flaws and bolster defenses. The announcement follows recent data leaks at the firm.

Anthropic has announced that its AI chatbot Claude will remain free of advertisements, contrasting sharply with rival OpenAI's recent decision to test ads in ChatGPT. The company launched a Super Bowl ad campaign mocking AI assistants that interrupt conversations with product pitches. This move highlights growing tensions in the competitive AI landscape.

Anthropic announced on Wednesday the launch of Claude Managed Agents, a new product aimed at simplifying the creation and deployment of AI agents for businesses. The tool provides developers with ready-made infrastructure to build autonomous AI systems. It addresses a key barrier in automating work tasks amid the company's rapid enterprise growth.
