Scammers are sending emails that appear genuine to OpenAI users, designed to pressure recipients into quickly revealing sensitive account data. The emails are followed by vishing (voice phishing) calls that intensify the pressure to disclose account details. The campaign highlights ongoing risks in AI platform security.
As reported by TechRadar on January 25, 2026, the scam targets OpenAI users with emails crafted to look authentic, exploiting the platform's familiarity to trick recipients into handing over sensitive information without delay.
According to the report, the emails serve as an initial hook; follow-up vishing calls then apply urgency and deception to coerce victims into sharing login credentials or other account details. This multi-step approach is designed to bypass standard security measures before victims have time to reflect.
OpenAI, a leading AI developer, had not issued a specific response at the time of the report, but such phishing tactics underscore broader vulnerabilities across tech ecosystems. Users are advised to verify communications through official channels; the report details no specific preventive steps, but one general sanity check, inspecting an email's authentication headers, is sketched below.
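As an illustration only, and not a step described in the report, the following Python sketch checks an email's Authentication-Results header for SPF and DKIM failures, one common signal of a spoofed sender. The sample message, the looks_spoofed helper, and the mx.example.com hostnames are all hypothetical; real mail clients vary in how they expose raw headers.

```python
# Minimal sketch: flag emails whose sender authentication did not pass,
# using only Python's standard library. The header names are standard
# (RFC 8601), but this sample message is entirely hypothetical.
from email import message_from_string
from email.message import Message

RAW_EMAIL = """\
From: "OpenAI" <support@openai.com>
Subject: Urgent: verify your account
Authentication-Results: mx.example.com;
 spf=fail smtp.mailfrom=bulk-mailer.example.net;
 dkim=none

Click here to verify your account immediately.
"""

def looks_spoofed(msg: Message) -> bool:
    """Return True when SPF or DKIM did not pass.

    A failing or missing result does not prove fraud, but mail that
    genuinely comes from a major provider should normally pass both.
    """
    results = msg.get("Authentication-Results", "").lower()
    spf_ok = "spf=pass" in results
    dkim_ok = "dkim=pass" in results
    return not (spf_ok and dkim_ok)

msg = message_from_string(RAW_EMAIL)
if looks_spoofed(msg):
    print(f"Suspicious: {msg['Subject']!r} failed sender authentication")
```

A failing check is a signal rather than proof; legitimate mail occasionally fails these checks, so a flagged message should prompt verification through official channels, not an automatic verdict.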
The scheme's effectiveness hinges on the realism of the emails, which makes them difficult to distinguish from legitimate notifications. The incident adds to a growing list of cyber threats facing AI services, where rapid credential theft can lead to unauthorized account access.
As cybersecurity concerns evolve, incidents like this emphasize the need for heightened vigilance among users of cloud-based tools.