Claude Mythos leak heightens cyber threats for banks

Anthropic's latest AI model, Claude Mythos, has leaked despite being deemed too dangerous for public release. Financial institutions now face advanced AI-powered attacks capable of exploiting unknown vulnerabilities.

Anthropic describes Claude Mythos as highly risky. The model has already become available outside the company and, when instructed, can identify and exploit previously unknown security gaps.

Banks are especially vulnerable because they operate complex IT systems that combine old and new components. Professor Co-Pierre Georg from the Frankfurt School of Finance names three main threats: loss of sensitive customer data, disruption of critical systems, and subsequent loss of trust.

The German Finance Ministry is monitoring developments closely. It maintains close contact with the financial supervisory authority, the financial sector, and European as well as international partners to prepare for potential risks.

Related Articles


US Treasury warns banks of AI cyberattack risks following Anthropic's Claude Mythos announcement

Reported by AI. Image generated by AI.

In the wake of Anthropic's unveiling of its powerful Claude Mythos AI—capable of detecting and exploiting software vulnerabilities—the US Treasury Secretary has convened top bank executives to highlight escalating AI-driven cyber threats. The move underscores growing concerns as the AI is restricted to a tech coalition via Project Glasswing.

Germany's financial regulator BaFin has warned banks about risks from Anthropic's Claude Mythos AI model, following US Treasury alerts. The model autonomously detects IT vulnerabilities at scale, potentially accelerating cyberattacks. US banks are testing it amid restrictions.


Anthropic has released a new cyber-focused AI model called Mythos, capable of detecting software flaws faster than humans and generating exploits. The model has raised alarms among governments and companies for potentially turbocharging hacking by exposing vulnerabilities quicker than they can be patched. Officials worldwide are scrambling to assess the risks.

Anthropic has ended unlimited access to its Claude AI models through third-party agents like OpenClaw, requiring heavy users to pay extra via API keys or usage bundles starting April 4, 2026. The policy shift, announced over the weekend, addresses severe system strain from high-volume agent tools previously covered under $20 monthly subscriptions.


Anthropic has discovered 14 high-severity security vulnerabilities in Firefox using its new Claude AI tools. The company states that AI enables faster detection of such issues. This finding was reported in a TechRadar article published on March 9, 2026.

In late February, Anthropic CEO Dario Amodei said the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons. Senior Pentagon officials responded that they have no intention of using AI for domestic surveillance, but insist that private firms cannot set binding limits on how the U.S. military employs AI tools.


Anthropic has filed a federal lawsuit against the US Department of Defense, challenging its recent label of the AI company as a supply-chain risk. The dispute stems from a contract disagreement over the use of Anthropic's Claude AI for military purposes, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates free speech and due process rights.

