OpenAI users targeted by scam emails and vishing calls

Scammers are sending convincing-looking emails to OpenAI users, designed to pressure them into revealing sensitive data quickly. These emails are followed by vishing calls that intensify the pressure on victims to disclose account details. The campaign highlights ongoing risks in AI platform security.

The scam targeting OpenAI users involves emails crafted to look authentic, as reported in a TechRadar article published on January 25, 2026. These messages exploit users' familiarity with the platform to trick recipients into handing over sensitive information without delay.

According to the report, the emails serve as an initial hook, leading to vishing (voice phishing) calls. In these calls, attackers use urgency and deception to coerce victims into sharing login credentials or other account details. This multi-step approach aims to bypass standard security measures quickly.

OpenAI, a leading AI developer, has not issued a public response according to the available information, but such phishing tactics underscore broader vulnerabilities in tech ecosystems. Users are advised to verify communications through official channels, though the report details no further preventive steps.

The scheme's effectiveness relies on the realism of the emails, making it challenging for individuals to distinguish them from legitimate notifications. This incident adds to the growing list of cyber threats facing AI services, where rapid data extraction can lead to unauthorized access.

As cybersecurity concerns evolve, incidents like this emphasize the need for heightened vigilance among users of cloud-based tools.

Related articles

Cybersecurity experts are increasingly alarmed by how artificial intelligence is reshaping cybercrime, with tools like deepfakes, AI phishing, and dark large language models enabling even novices to execute advanced scams. These developments pose significant risks to businesses in the coming year. Published insights from TechRadar underscore the scale and sophistication of these emerging threats.

Reported by AI

Criminals have distributed fake AI extensions in the Google Chrome Web Store to target more than 300,000 users. These tools aim to steal emails, personal data, and other information. The issue highlights ongoing efforts to push surveillance software through legitimate channels.

Infostealer malware has targeted OpenClaw AI agents for the first time, according to a TechRadar report. The incident highlights vulnerabilities in locally deployed AI systems that store sensitive information. The article was published on February 17, 2026.

Reported by AI

A recent scan of millions of Android apps has revealed significant data leaks from AI software, exceeding expectations in scale. Hardcoded secrets persist in most Android AI applications today. The findings highlight ongoing privacy risks in mobile technology.

Thursday, March 19, 2026, 04:05

Three high-risk AI vulnerabilities discovered in Claude.ai

Wednesday, March 11, 2026, 06:28

Meta announces new scam protection features

Wednesday, March 4, 2026, 09:00

TechRadar: Hackers Use Easy AI Tools for Quicker Cyber Attacks

Friday, February 6, 2026, 08:30

Gatchalian warns against AI ‘love scams’

Tuesday, January 27, 2026, 16:17

AI-based anti-phishing platform prevents 19 billion won in financial damage

Monday, January 26, 2026, 00:51

Hackers are using LLMs to build next-generation phishing attacks

Friday, January 23, 2026, 19:20

Custom vishing kits target SSO accounts worldwide

Sunday, January 18, 2026, 01:24

AI companies gear up for ads as manipulation threats emerge

Tuesday, December 23, 2025, 17:50

Users misuse Google and OpenAI chatbots for bikini deepfakes

Thursday, December 11, 2025, 16:50

AI scales up cyber attacks in 2025
