Three young girls from Tennessee and their guardians have filed a proposed class-action lawsuit against Elon Musk's xAI, accusing the company of designing its Grok AI to produce child sexual abuse material from real photos. The suit stems from a Discord tip that led to a police investigation linking Grok to explicit images of the victims. The plaintiffs seek an injunction and damages on behalf of thousands of potentially harmed minors.
A proposed class-action lawsuit filed on Monday in a US district court accuses xAI of intentionally designing Grok to "profit off the sexual predation of real people, including children." The plaintiffs, three girls from Tennessee, claim that at least thousands of minors have been victimized. Their attorney, Annika K. Martin, stated, "These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company's AI tool and then traded among predators." Martin added, "We intend to hold xAI accountable for every child they harmed in this way."

Victims report acute emotional distress, as well as fears about college admissions, attending graduation, and being stalked, because the files included their real names and school information.

The case began in December, when one victim, now over 18, received an anonymous Instagram message from a Discord user about her explicit "pics," which had been shared in a folder alongside images of 18 other minors. The images were AI-generated depictions based on social media photos taken when she was a minor, and she recognized other girls from her school among the victims.

Local law enforcement investigated and found that the perpetrator had used a third-party app with access to Grok to morph the photos. The files were uploaded to Mega and bartered in Telegram groups. The lawsuit alleges that xAI licenses server access to such apps, hosts the content on its servers, and distributes it, in violation of child pornography laws.

In January, Elon Musk denied awareness of any "naked underage images generated by Grok," claiming he had seen "literally zero." Researchers from the Center for Countering Digital Hate estimated that among three million sexualized images Grok produced, about 23,000 depicted apparent children. xAI limited access to paying subscribers but did not update its filters. The company did not respond to requests for comment.