Fake Chrome AI extensions targeted over 300,000 users

Criminals have distributed fake AI extensions through the Google Chrome Web Store, targeting more than 300,000 users. The tools are designed to steal emails, personal data, and other information, highlighting ongoing efforts to push surveillance software through legitimate channels.

According to reports, the extensions are disguised as legitimate AI features and attempt to harvest sensitive information, such as emails and personal data, from affected users.

This development underscores vulnerabilities in browser extension ecosystems, where malicious actors can blend harmful software with everyday productivity aids. The extensions were designed to collect data covertly, posing risks to user privacy and security.

Details emerged in a TechRadar article published on February 13, 2026, which outlined the scale of the targeting and the methods involved. No specific timeline for the campaign's start was provided, but the focus remains on the Web Store as a distribution point.

Users are advised to review and remove suspicious extensions; broader implications for platform security were not detailed in the available reporting.

Related articles

Google has introduced new defenses against prompt injection in its Chrome browser. The update features an AI system designed to monitor the activities of other AIs.

Reported by AI

Scammers are sending emails that appear genuine to OpenAI users, designed to manipulate them into revealing critical data swiftly. These emails are followed by vishing calls that intensify the pressure on victims to disclose account details. The campaign highlights ongoing risks in AI platform security.

Google has rolled out new Gemini AI tools for its Chrome browser, including a sidebar for multitasking and an integrated image generator. The updates also preview an 'Auto Browse' agent to automate web tasks. These enhancements aim to make browsing more personalized and efficient.

Reported by AI

IBM's artificial intelligence tool, known as Bob, has been found susceptible to manipulation that could lead to downloading and executing malware. Researchers highlight its vulnerability to indirect prompt injection attacks. The findings were reported by TechRadar on January 9, 2026.

Thursday, February 26, 2026, 14:34

Hackers use AI to exploit security flaws faster, IBM finds

Wednesday, February 25, 2026, 15:18

New cybercrime platform 1Campaign aids malicious Google ads

Tuesday, February 24, 2026, 10:43

OpenAI and Google bolster AI safeguards after Grok image scandal

Tuesday, February 17, 2026, 10:18

OpenClaw AI agents targeted by infostealer malware for first time

Sunday, February 15, 2026, 09:14

Mozilla introduces optional AI features in Firefox update

Wednesday, February 11, 2026, 12:13

North Korean hackers use AI video to spread malware

Monday, February 2, 2026, 00:15

Report uncovers data leaks in Android AI apps

Saturday, January 24, 2026, 06:44

Experts highlight AI threats like deepfakes and dark LLMs in cybercrime

Sunday, January 18, 2026, 01:24

AI companies gear up for ads as manipulation threats emerge

Thursday, December 11, 2025, 16:50

AI scales up cyber attacks in 2025
