Claude Mythos leak raises IT threats against banks

Anthropic's latest AI model, Claude Mythos, has leaked despite being deemed too dangerous for public release. Financial institutions now face advanced AI-driven attacks that can exploit unknown vulnerabilities.

Anthropic describes Claude Mythos as high-risk. The model has already become available outside the company and can identify and exploit unknown security flaws on instruction.

Banks are particularly vulnerable because they run complex IT systems that combine old and new components. Professor Co-Pierre Georg of the Frankfurt School of Finance points to three main threats: loss of sensitive customer data, disruption of critical systems, and a subsequent loss of trust.

The German finance ministry is following developments closely. It maintains close contact with the financial supervisory authority, the financial sector, and European and international partners to prepare for potential risks.

Related articles

AI-generated image: illustration of the US Treasury Secretary warning bank executives about AI cyberattack risks from Anthropic's Claude Mythos.

US Treasury warns banks of AI cyberattack risks following Anthropic's Claude Mythos announcement

Reported by AI. Image generated by AI.

In the wake of Anthropic's unveiling of its powerful Claude Mythos AI—capable of detecting and exploiting software vulnerabilities—the US Treasury Secretary has convened top bank executives to highlight escalating AI-driven cyber threats. The move underscores growing concerns as the AI is restricted to a tech coalition via Project Glasswing.

Germany's financial regulator BaFin has warned banks about risks from Anthropic's Claude Mythos AI model, following US Treasury alerts. The model autonomously detects IT vulnerabilities at scale, potentially accelerating cyberattacks. US banks are testing it amid restrictions.


Anthropic has released a new cyber-focused AI model called Mythos, capable of detecting software flaws faster than humans and generating exploits. The model has raised alarms among governments and companies for potentially turbocharging hacking by exposing vulnerabilities quicker than they can be patched. Officials worldwide are scrambling to assess the risks.

Anthropic has restricted unlimited access to its Claude AI models through third-party agents like OpenClaw, requiring heavy users to pay extra via API keys or usage bundles starting April 4, 2026. The policy shift, announced over the weekend, addresses severe system strain from high-volume agent tools previously covered under $20 monthly subscriptions.


Anthropic has discovered 14 high-severity security vulnerabilities in Firefox using its new Claude AI tools. The company states that AI enables faster detection of such issues. This finding was reported in a TechRadar article published on March 9, 2026.

After Anthropic CEO Dario Amodei said in late February that the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons, senior Pentagon officials said they have no intention of using AI for domestic surveillance but insisted that private firms cannot set binding limits on how the U.S. military employs AI tools.


Anthropic has filed a federal lawsuit against the US Department of Defense, challenging its recent label of the AI company as a supply-chain risk. The dispute stems from a contract disagreement over the use of Anthropic's Claude AI for military purposes, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates free speech and due process rights.

