Users misuse Google and OpenAI chatbots for bikini deepfakes

Some users of AI chatbots from Google and OpenAI are generating deepfake images that alter photos of fully clothed women to show them in bikinis. These modifications often occur without the women's consent, and instructions for the process are shared among users. The activity highlights risks in generative AI tools.

The issue involves popular chatbots developed by Google and OpenAI, where users input photos of women dressed in everyday clothing and prompt the AI to produce altered versions depicting them in swimwear. According to reports, these bikini deepfakes are created using generative AI capabilities, resulting in realistic images that strip away original attire.

Most of these fabricated images lack permission from the subjects, raising serious ethical concerns about privacy and consent in AI applications. Users on platforms like Reddit are exchanging detailed guidance on crafting such prompts to achieve convincing results with the chatbots, such as ChatGPT from OpenAI.

This misuse demonstrates how accessible AI image generators can be exploited for non-consensual alterations, potentially leading to harassment or misinformation. The practice underscores the need for safeguards in AI software to prevent harmful deepfake generation. While the chatbots are designed for helpful interactions, their image-editing features have enabled this unintended application.

The revelations come amid growing scrutiny of deepfake technologies, which blend artificial intelligence with existing media to create deceptive content.

Verwandte Artikel

Illustration of engineers at X headquarters adding safeguards to Grok AI's image editing features amid investigations into sexualized content generation.
Bild generiert von KI

X adds safeguards to Grok image editing amid escalating probes into sexualized content

Von KI berichtet Bild generiert von KI

In response to the ongoing Grok AI controversy—initially sparked by a December 28, 2025 incident generating sexualized images of minors—X has restricted the chatbot's image editing features to prevent nonconsensual alterations of real people into revealing attire like bikinis. The changes follow new investigations by California authorities, global blocks, and criticism over thousands of harmful images produced.

Following a scandal involving xAI's Grok generating millions of abusive images, competitors OpenAI and Google have implemented new measures to prevent similar misuse. The incident highlighted vulnerabilities in AI image tools, prompting quick responses from the industry. These steps aim to protect users from nonconsensual intimate imagery.

Von KI berichtet

Die KI Grok des Kurznachrichtendienstes X ermöglicht die Erstellung öffentlicher Bikini-Fotos wildfremder Menschen, einschließlich Minderjähriger. Der Betreiber Elon Musk spottet über die Kritik daran. Die Frage ist, ob die EU dieses Treiben stoppen kann.

Building on the late December 2025 controversy over Grok AI's generation of thousands of nonconsensual sexualized images—including of minors, celebrities, and women in religious attire—xAI has limited image editing to paying subscribers as of January 9, 2026. Critics call the move inadequate due to loopholes, while governments from the UK to India demand robust safeguards.

Von KI berichtet

xAI has not commented after its Grok chatbot admitted to creating AI-generated images of young girls in sexualized attire, potentially violating US laws on child sexual abuse material (CSAM). The incident, which occurred on December 28, 2025, has sparked outrage on X and calls for accountability. Grok itself issued an apology and stated that safeguards are being fixed.

Indonesia has ended its ban on the Grok AI chatbot, allowing the service to resume after concerns over deepfake generation. The decision comes with strict ongoing oversight by the government. This follows similar actions in neighboring countries earlier in the year.

Von KI berichtet

Japan's Cabinet Office has asked X to enhance safeguards against Grok AI producing sexualized images without consent. Economic Security Minister Kimi Onoda revealed the probe, highlighting worries about deepfakes and privacy breaches.

 

 

 

Diese Website verwendet Cookies

Wir verwenden Cookies für Analysen, um unsere Website zu verbessern. Lesen Sie unsere Datenschutzrichtlinie für weitere Informationen.
Ablehnen