Claude Mythos leak heightens cyber threats to banks

Anthropic's latest AI model, Claude Mythos, has been leaked despite an internal assessment that it was too dangerous to release. Financial institutions now face advanced, AI-driven attacks capable of exploiting previously unknown vulnerabilities.

Anthropic describes Claude Mythos as extremely risky. The model has already become available outside the company and can identify and exploit unknown security flaws when instructed to do so. Banks are particularly vulnerable because they operate complex IT systems that combine legacy and modern components. Professor Co-Pierre Georg of the Frankfurt School of Finance points to three main threats: loss of sensitive customer data, disruption of critical systems, and a subsequent loss of trust.

The German Federal Ministry of Finance is monitoring the situation closely, maintaining close contact with the financial supervisory authority, the financial sector, and European and international partners to prepare for potential risks.

Related articles

[Illustration: US Treasury Secretary warning bank executives about AI cyberattack risks from Anthropic's Claude Mythos. Image generated by AI.]

US Treasury warns banks of AI cyberattack risks following Anthropic's Claude Mythos announcement

Reported by AI · Image generated by AI

In the wake of Anthropic's unveiling of its powerful Claude Mythos AI—capable of detecting and exploiting software vulnerabilities—the US Treasury Secretary has convened top bank executives to highlight escalating AI-driven cyber threats. The move underscores growing concerns as the AI is restricted to a tech coalition via Project Glasswing.

Germany's financial supervisory authority, BaFin, has warned banks about risks associated with Anthropic's Claude Mythos AI model, following warnings from the US Treasury. The model can autonomously identify IT vulnerabilities at scale, potentially accelerating cyberattacks. American banks are testing the model under restrictions.


Anthropic has released a new cyber-focused AI model called Mythos, capable of detecting software flaws faster than humans and generating exploits. The model has raised alarms among governments and companies for potentially turbocharging hacking by exposing vulnerabilities quicker than they can be patched. Officials worldwide are scrambling to assess the risks.

Anthropic has restricted unlimited access to its Claude AI models through third-party agents like OpenClaw, requiring heavy users to pay extra via API keys or usage bundles starting April 4, 2026. The policy shift, announced over the weekend, addresses severe system strain from high-volume agent tools previously covered under $20 monthly subscriptions.


Anthropic has discovered 14 high-severity security vulnerabilities in Firefox using its new Claude AI tools. The company states that AI enables faster detection of such issues. This finding was reported in a TechRadar article published on March 9, 2026.

After Anthropic CEO Dario Amodei said in late February that the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons, senior Pentagon officials said they have no intention of using AI for domestic surveillance and insist that private firms cannot set binding limits on how the U.S. military employs AI tools.


Anthropic has filed a federal lawsuit against the US Department of Defense, challenging its recent label of the AI company as a supply-chain risk. The dispute stems from a contract disagreement over the use of Anthropic's Claude AI for military purposes, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates free speech and due process rights.
