Users misuse Google and OpenAI chatbots for bikini deepfakes

Some users of AI chatbots from Google and OpenAI are generating deepfake images that alter photos of fully clothed women to show them in bikinis. These modifications are typically made without the women's consent, and users share instructions for producing them. The activity highlights how easily consumer-facing generative AI tools can be turned to non-consensual image manipulation.

The issue involves popular chatbots developed by Google and OpenAI, where users upload photos of women dressed in everyday clothing and prompt the AI to produce altered versions depicting them in swimwear. According to reports, these bikini deepfakes exploit the chatbots' built-in image-generation capabilities, yielding realistic images in which the subjects' original clothing is replaced.

Most of these fabricated images are created without the subjects' permission, raising serious ethical concerns about privacy and consent in AI applications. Users on platforms such as Reddit are exchanging detailed guidance on crafting prompts that produce convincing results with the chatbots, including OpenAI's ChatGPT.

This misuse demonstrates how accessible AI image generators can be exploited for non-consensual alterations, potentially enabling harassment or misinformation. The practice underscores the need for stronger safeguards in AI software to prevent harmful deepfake generation. While the chatbots are designed for helpful interactions, their image-editing features have enabled this unintended use.

The revelations come amid growing scrutiny of deepfake technologies, which blend artificial intelligence with existing media to create deceptive content.

Related articles


X adds safeguards to Grok image editing amid escalating probes into sexualized content


In response to the ongoing Grok AI controversy—initially sparked by a December 28, 2025 incident generating sexualized images of minors—X has restricted the chatbot's image editing features to prevent nonconsensual alterations of real people into revealing attire like bikinis. The changes follow new investigations by California authorities, global blocks, and criticism over thousands of harmful images produced.

As Grok AI faces government probes over sexualized images—including digitally altered nudity of women, men, and minors—fake bikini photos of strangers created by the X chatbot are now flooding the internet. Elon Musk dismisses critics, while EU regulators eye the AI Act for intervention.


Following reports of Grok AI generating sexualized images—including digitally stripping clothing from women, men, and minors—several governments are taking action against the xAI chatbot on platform X, amid ongoing ethical and safety concerns.

In the latest controversy over xAI's Grok generating sexualized images on X, Swedish Energy Minister and Deputy PM Ebba Busch has publicly criticized an AI-altered bikini image of herself, calling for consent and restraint in AI use.


Cybersecurity experts are increasingly alarmed by how artificial intelligence is reshaping cybercrime, with tools like deepfakes, AI phishing, and dark large language models enabling even novices to execute advanced scams. These developments pose significant risks to businesses in the coming year. Published insights from TechRadar underscore the scale and sophistication of these emerging threats.

The European Union has launched a formal investigation into Elon Musk's xAI following concerns that its Grok chatbot generated non-consensual sexualized images, including potential child sexual abuse material. Regulators are examining whether the company complied with the Digital Services Act in mitigating risks on the X platform. Fines could reach 6 percent of xAI's global annual turnover if violations are found.


OpenAI reported a dramatic increase in child exploitation incidents to the National Center for Missing & Exploited Children during the first half of 2025, sending 80 times more reports than in the same period of 2024. The company attributed the rise to expanded moderation capabilities, new features allowing image uploads, and rapid user growth. This spike reflects broader concerns about child safety in generative AI platforms.
