OpenAI users targeted by scam emails and vishing calls

Scammers are sending convincing, genuine-looking emails to OpenAI users, designed to pressure them into revealing sensitive data quickly. The emails are followed by vishing calls that ratchet up the pressure on victims to disclose account details. The campaign highlights ongoing security risks around AI platforms.

The scam targeting OpenAI users involves emails crafted to look authentic, as reported in a TechRadar article published on January 25, 2026. The messages exploit users' familiarity with the platform to trick recipients into handing over sensitive information without pausing to verify the request.

According to the description, the emails serve as an initial hook, leading to vishing—voice phishing—calls. In these calls, attackers use urgency and deception to coerce victims into sharing login credentials or other account details. This multi-step approach aims to bypass standard security measures quickly.

OpenAI, a leading AI developer, has not issued a specific response in the available information, but such phishing tactics underscore broader vulnerabilities in tech ecosystems. Users are advised to verify communications through official channels, though no direct preventive steps are detailed here.

The scheme's effectiveness depends on how realistic the emails are, making it difficult for individuals to distinguish them from legitimate notifications. The incident adds to a growing list of cyber threats facing AI services, where rapid data extraction can lead to unauthorized account access.

As cybersecurity concerns evolve, incidents like this emphasize the need for heightened vigilance among users of cloud-based tools.

Related articles

Cybersecurity experts are increasingly alarmed by how artificial intelligence is reshaping cybercrime, with tools like deepfakes, AI phishing, and dark large language models enabling even novices to execute advanced scams. These developments pose significant risks to businesses in the coming year. Published insights from TechRadar underscore the scale and sophistication of these emerging threats.

Reported by AI

Cyber threats in the Philippines in 2025 remained limited to traditional methods such as phishing and ransomware, with no new forms emerging. However, artificial intelligence amplified the volume and scale of these attacks, leading to what has been called the "industrialization of cybercrime." Reports from multiple cybersecurity vendors highlight increases in the speed, scale, and frequency of incidents.

Experts have warned that phishing attacks are now appearing in LinkedIn comments. Hackers are exploiting the platform's comment sections to distribute malware. Users are advised to stay vigilant against suspicious links in these interactions.

Reported by AI

A growing number of companies are evaluating the security risks associated with artificial intelligence, marking a shift from previous years. This trend indicates heightened awareness among businesses about potential vulnerabilities in AI technologies. The development comes as organizations prioritize protective measures against emerging threats.

