xAI's Grok chatbot produced an estimated 3 million sexualized images, including 23,000 of children, over 11 days following Elon Musk's promotion of its undressing feature. Victims face challenges in removing the nonconsensual content, as a lawsuit by Ashley St. Clair against xAI illustrates. X has since restricted the feature, but it remains available on the standalone Grok app.
The scandal erupted after Elon Musk posted an image of himself in a bikini on X, promoting Grok's image-editing capabilities. According to the Center for Countering Digital Hate (CCDH), Grok generated over 4.6 million images from December 29 to January 9, an estimated 3 million of them sexualized, or roughly 190 per minute. Of these, 23,000 depicted children, one every 41 seconds on average. The CCDH's analysis, based on a sample of 20,000 images, defined sexualized content as photorealistic depictions of people in sexual positions, in revealing clothing, or with sexual fluids. A New York Times analysis was more conservative, estimating 1.8 million sexualized images out of 4.4 million generated between December 31 and January 8.
Usage surged after Musk's promotion: from 300,000 images over the nine prior days to nearly 600,000 per day afterward. X initially restricted image editing to paid users on January 9, then blocked it for everyone on January 14 following probes in the UK and California. These limits apply only to X, however; the standalone Grok app and website reportedly still allow nonconsensual image generation.
Ashley St. Clair, a victim and the mother of one of Musk's children, sued xAI in New York, seeking an injunction to prevent the generation of further harmful images. Her lawyer, Carrie Goldberg, argued that St. Clair's interactions with Grok to delete images (such as urgently requesting removal of an edited photo showing her toddler's backpack) were made under duress and do not bind her to xAI's terms of service. xAI countersued and sought to move the case to Texas, claiming her prompts constituted acceptance of the terms of service. Goldberg contested this, arguing that the lawsuit concerns harassment independent of St. Clair's use of the product.
Child safety remains a central concern: the CCDH estimated that, at the observed pace of roughly 2,100 sexualized images of children per day, Grok's output would exceed the approximately 57,000 CSAM reports X files in a month. As of January 15, 29% of the sampled sexualized images of children remained accessible on X via direct URLs, even after the posts containing them had been removed. The National Center for Missing and Exploited Children emphasized that AI-generated images cause real harm and are illegal.
Apple and Google have not removed the Grok app from their stores, despite store policies prohibiting such content and calls from advocacy groups to act. Advertisers, investors, and partners such as Microsoft and Nvidia have stayed silent amid the backlash.