Three high-risk AI vulnerabilities discovered in Claude.ai

Researchers have identified three high-risk vulnerabilities in Claude.ai that together enable an end-to-end attack chain exfiltrating sensitive information without the user's knowledge. Even a legitimate Google ad could trigger the data exfiltration.

On March 19, 2026, TechRadar reported the discovery of three high-risk AI vulnerabilities in Claude.ai. The flaws form an end-to-end attack chain capable of exfiltrating sensitive information without the user knowing; notably, even a legitimate Google ad could trigger such exfiltration. The case highlights the risk that external elements like ads can compromise user data in AI systems.

Related articles

Illustration: Anthropic restricting Claude Mythos AI and launching the Project Glasswing consortium with tech giants to address cybersecurity vulnerabilities. (AI-generated image)

Anthropic restricts Claude Mythos AI release and launches Project Glasswing over cybersecurity risks

Reported by AI · AI-generated image

Anthropic has limited access to its Claude Mythos Preview AI model due to its superior ability to detect and exploit software vulnerabilities, while launching Project Glasswing—a consortium with over 45 tech firms including Apple, Google, and Microsoft—to collaboratively patch flaws and bolster defenses. The announcement follows recent data leaks at the firm.

Cybersecurity researchers have identified a fraudulent website mimicking the popular AI tool Claude that delivers backdoor malware to visitors. The discovery highlights how cybercriminals are capitalizing on growing interest in artificial intelligence platforms.


Anthropic's latest AI model Claude Mythos has leaked despite being deemed too dangerous for public release. Financial institutions now face advanced AI-powered attacks capable of exploiting unknown vulnerabilities.

Infostealer malware has targeted OpenClaw AI agents for the first time, according to a TechRadar report published on February 17, 2026. The incident highlights vulnerabilities in locally deployed AI systems that store sensitive information.


Anthropic has restricted unlimited access to its Claude AI models through third-party agents like OpenClaw, requiring heavy users to pay extra via API keys or usage bundles starting April 4, 2026. The policy shift, announced over the weekend, addresses severe system strain from high-volume agent tools previously covered under $20 monthly subscriptions.

Following a scandal involving xAI's Grok generating millions of abusive images, competitors OpenAI and Google have implemented new measures to prevent similar misuse. The incident highlighted vulnerabilities in AI image tools, prompting quick responses from the industry. These steps aim to protect users from nonconsensual intimate imagery.


Germany's financial regulator BaFin has warned banks about risks from Anthropic's Claude Mythos AI model, following US Treasury alerts. The model autonomously detects IT vulnerabilities at scale, potentially accelerating cyberattacks. US banks are testing it amid restrictions.
