Following a scandal in which xAI's Grok generated millions of abusive images, competitors OpenAI and Google have implemented new measures to prevent similar misuse. The incident exposed vulnerabilities in AI image tools and prompted swift responses from the industry. These steps aim to protect users from nonconsensual intimate imagery.
The scandal began in January 2026, when Grok, an AI tool developed by Elon Musk's xAI, was exploited to create sexualized images from pictures shared on X, formerly Twitter. A study by the Center for Countering Digital Hate reported that Grok produced 3 million such images over 11 days, including approximately 23,000 depicting children.
On January 14, X's Safety account announced a pause on Grok's image-editing capabilities within the social media app, though paying subscribers could still access its image-generation features via the standalone app and website. X did not respond to requests for comment.
In response, OpenAI addressed a vulnerability in ChatGPT identified by cybersecurity firm Mindgard. Researchers used adversarial prompting to bypass guardrails and generate intimate images of well-known individuals. Mindgard notified OpenAI in early February, and the company confirmed the fix on February 10.
"We're grateful to the researchers who shared their findings," an OpenAI spokesperson stated. "We moved quickly to fix a bug that allowed the model to generate these images. We value this kind of collaboration and remain focused on strengthening safeguards to keep users safe."
Mindgard emphasized the need for robust defenses: "Assuming motivated users will not attempt to bypass safeguards is a strategic miscalculation. Attackers iterate. Guardrails must assume persistence."
Google, meanwhile, streamlined its process for removing explicit images from Google Search. Users can now report multiple images at once by selecting the three-dot menu in the upper-right corner of a result and indicating that the content "shows a sexual image of me," and they can track the status of their reports more easily.
"We hope that this new removal process reduces the burden that victims of nonconsensual explicit imagery face," Google said in a blog post. The company referred to its generative AI prohibited use policy, which bans illegal or abusive activities like creating intimate imagery.
Advocates note that challenges remain: laws such as the 2025 Take It Down Act are limited in scope, prompting calls for stronger regulation.