Users misuse Google and OpenAI chatbots for bikini deepfakes

Some users of AI chatbots from Google and OpenAI are generating deepfake images that alter photos of fully clothed women to show them in bikinis. The modifications are often made without the women's consent, and users share instructions for producing them. The activity highlights the risks posed by accessible generative AI tools.

The issue involves popular chatbots developed by Google and OpenAI: users input photos of women dressed in everyday clothing and prompt the AI to produce altered versions depicting them in swimwear. According to reports, the chatbots' generative AI capabilities yield realistic images that replace the subjects' original attire.

Most of these fabricated images are created without the subjects' permission, raising serious ethical concerns about privacy and consent in AI applications. Users on platforms like Reddit exchange detailed guidance on crafting prompts that achieve convincing results with chatbots such as OpenAI's ChatGPT.

This misuse demonstrates how accessible AI image generators can be exploited for non-consensual alterations, potentially leading to harassment or misinformation. The practice underscores the need for safeguards in AI software to prevent harmful deepfake generation. While the chatbots are designed for helpful interactions, their image-editing features have enabled this unintended application.

The revelations come amid growing scrutiny of deepfake technologies, which blend artificial intelligence with existing media to create deceptive content.

Related articles

X adds safeguards to Grok image editing amid escalating probes into sexualized content

In response to the ongoing Grok AI controversy—initially sparked by a December 28, 2025 incident generating sexualized images of minors—X has restricted the chatbot's image editing features to prevent nonconsensual alterations of real people into revealing attire like bikinis. The changes follow new investigations by California authorities, global blocks, and criticism over thousands of harmful images produced.

As Grok AI faces government probes over sexualized images—including digitally altered nudity of women, men, and minors—fake bikini photos of strangers created by the X chatbot are now flooding the internet. Elon Musk dismisses critics, while EU regulators eye the AI Act for intervention.

Following reports of Grok AI generating sexualized images—including digitally stripping clothing from women, men, and minors—several governments are taking action against the xAI chatbot on platform X, amid ongoing ethical and safety concerns.

In the latest controversy over xAI's Grok generating sexualized images on X, Sweden's energy minister and deputy prime minister Ebba Busch has publicly criticized an AI-altered bikini image of herself, demanding consent and restraint in the use of AI.

Cybersecurity experts are increasingly alarmed by how artificial intelligence is reshaping cybercrime, with tools like deepfakes, AI phishing, and dark large language models enabling even novices to execute advanced scams. These developments pose significant risks to businesses in the coming year. Published insights from TechRadar underscore the scale and sophistication of these emerging threats.

The European Union has launched a formal investigation into Elon Musk's xAI following concerns that its Grok chatbot generated non-consensual sexualized images, including potential child sexual abuse material. Regulators are examining whether the company complied with the Digital Services Act in mitigating risks on the X platform. Fines could reach 6 percent of xAI's global annual turnover if violations are found.

OpenAI reported a dramatic increase in child exploitation incidents to the National Center for Missing & Exploited Children during the first half of 2025, sending 80 times more reports than in the same period of 2024. The company attributed the rise to expanded moderation capabilities, new features allowing image uploads, and rapid user growth. This spike reflects broader concerns about child safety in generative AI platforms.
