Three Tennessee girls sue xAI over Grok-generated CSAM

Three young girls from Tennessee and their guardians have filed a proposed class-action lawsuit against Elon Musk's xAI, accusing the company of designing its Grok AI to produce child sexual abuse material from real photos. The suit stems from a Discord tip that led to a police investigation linking Grok to explicit images of the victims. They seek an injunction and damages for thousands of potentially harmed minors.

A proposed class-action lawsuit filed on Monday in a US district court accuses xAI of intentionally designing Grok to 'profit off the sexual predation of real people, including children.' The plaintiffs, three girls from Tennessee, claim at least thousands of minors have been victimized. Their attorney, Annika K. Martin, stated, 'These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company's AI tool and then traded among predators.' Martin added, 'We intend to hold xAI accountable for every child they harmed in this way.' Victims report acute emotional distress, fears over college admissions, graduation attendance, and stalking risks, as files included true names and school information. The case began in December when one victim, now over 18, received an anonymous Instagram message from a Discord user about her explicit 'pics' shared in a folder with images of 18 other minors. The images were AI-generated depictions based on her social media photos from when she was a minor. She recognized other girls from her school. Local law enforcement investigated, finding the perpetrator used a third-party app with access to Grok to morph the photos. The files were uploaded to Mega and bartered in Telegram groups. The lawsuit alleges xAI licenses server access to such apps, hosts the content on its servers, and distributes it, violating child pornography laws. Earlier, in January, Elon Musk denied awareness of any 'naked underage images generated by Grok,' claiming he had seen 'literally zero.' Researchers from the Center for Countering Digital Hate estimated Grok produced about 23,000 images depicting apparent children among three million sexualized outputs. xAI limited access to paying subscribers but did not update filters. xAI did not respond to requests for comment.

Liittyvät artikkelit

Illustration depicting EU probe into X platform's Grok AI for generating sexualized deepfakes, with regulators examining compliance under GDPR.
AI:n luoma kuva

EU launches probe into X over Grok's sexualized images

Raportoinut AI AI:n luoma kuva

Ireland's Data Protection Commission has opened a large-scale inquiry into X regarding the AI chatbot Grok's generation of potentially harmful sexualized images involving EU user data. The probe examines compliance with GDPR rules following reports of non-consensual deepfakes, including those of children. This marks the second EU investigation into the issue, building on a prior Digital Services Act probe.

Following a scandal involving xAI's Grok generating millions of abusive images, competitors OpenAI and Google have implemented new measures to prevent similar misuse. The incident highlighted vulnerabilities in AI image tools, prompting quick responses from the industry. These steps aim to protect users from nonconsensual intimate imagery.

Raportoinut AI

Apple warned Elon Musk's xAI that its Grok AI app faced removal from the App Store unless it addressed issues with sexualized deepfakes. The company detailed its actions in a letter to US senators amid concerns over abusive image generation. Grok was rejected, reworked, and later approved after improvements.

Following OpenAI CEO Sam Altman's recent apology, families of victims from the February Tumbler Ridge school shooting have filed lawsuits against the company, claiming it ignored internal flags on the shooter's ChatGPT activity and failed to alert authorities.

Tämä verkkosivusto käyttää evästeitä

Käytämme evästeitä analyysiä varten parantaaksemme sivustoamme. Lue tietosuojakäytäntömme tietosuojakäytäntö lisätietoja varten.
Hylkää