Claude Mythos leak heightens cyber threats for banks

Anthropic's latest AI model, Claude Mythos, has leaked despite being deemed too dangerous for public release. Financial institutions now face sophisticated AI-driven attacks capable of exploiting unknown vulnerabilities.

Anthropic describes Claude Mythos as posing elevated risks. The model is already circulating outside the company and can identify and exploit previously unknown security flaws on command.

Banks are particularly vulnerable because they operate complex IT systems that combine legacy and modern components. Professor Co-Pierre Georg of the Frankfurt School of Finance identifies three major threats: the loss of sensitive customer data, the disruption of critical systems, and the resulting loss of trust.

The German Finance Ministry is monitoring the situation closely. It maintains close contact with the financial supervisory authority, the financial sector, and its European and international partners in order to prepare for potential risks.

Related articles

[AI-generated illustration: US Treasury Secretary warning bank executives about AI cyberattack risks from Anthropic's Claude Mythos.]

US Treasury warns banks of AI cyberattack risks following Anthropic's Claude Mythos announcement

Reported by AI

In the wake of Anthropic's unveiling of its powerful Claude Mythos AI—capable of detecting and exploiting software vulnerabilities—the US Treasury Secretary has convened top bank executives to highlight escalating AI-driven cyber threats. The move underscores growing concerns as the AI is restricted to a tech coalition via Project Glasswing.

Germany's financial regulator BaFin has warned banks about risks from Anthropic's Claude Mythos AI model, following US Treasury alerts. The model autonomously detects IT vulnerabilities at scale, potentially accelerating cyberattacks. US banks are testing it amid restrictions.


Anthropic has released a new cyber-focused AI model called Mythos, capable of detecting software flaws faster than humans and generating exploits. The model has raised alarms among governments and companies for potentially turbocharging hacking by exposing vulnerabilities quicker than they can be patched. Officials worldwide are scrambling to assess the risks.

Anthropic has restricted unlimited access to its Claude AI models through third-party agents like OpenClaw, requiring heavy users to pay extra via API keys or usage bundles starting April 4, 2026. The policy shift, announced over the weekend, addresses severe system strain from high-volume agent tools previously covered under $20 monthly subscriptions.


Anthropic has discovered 14 high-severity security vulnerabilities in Firefox using its new Claude AI tools. The company states that AI enables faster detection of such issues. This finding was reported in a TechRadar article published on March 9, 2026.

After Anthropic CEO Dario Amodei said in late February that the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons, senior Pentagon officials responded that they have no intention of using AI for domestic surveillance. They also insist that private firms cannot set binding limits on how the U.S. military employs AI tools.


Anthropic has filed a federal lawsuit against the US Department of Defense, challenging its recent label of the AI company as a supply-chain risk. The dispute stems from a contract disagreement over the use of Anthropic's Claude AI for military purposes, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates free speech and due process rights.

