OpenAI sent dramatically more reports of child exploitation to the National Center for Missing & Exploited Children (NCMEC) in the first half of 2025, roughly 80 times as many as in the same period of 2024. The company attributed the jump to expanded moderation capabilities, new features allowing image uploads, and rapid user growth. The spike reflects broader concerns about child safety on generative AI platforms.
OpenAI disclosed a sharp increase in its reporting of child sexual abuse material (CSAM) and other exploitation to NCMEC's CyberTipline. In the first six months of 2025, the company submitted 75,027 reports covering 74,559 pieces of content, up from 947 reports covering 3,252 pieces in the first half of 2024.
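For context on the headline multiple, the ratios implied by those figures work out to:

75,027 / 947 ≈ 79.2 (reports) and 74,559 / 3,252 ≈ 22.9 (pieces of content)

In other words, report counts rose roughly 79-fold, the basis of the "about 80 times" figure, while the volume of flagged content rose roughly 23-fold.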
OpenAI spokesperson Gaby Raila explained that investments made toward the end of 2024 enhanced the company's ability to review and act on reports amid growing user numbers. "The time frame corresponds to the introduction of more product surfaces that allowed image uploads and the growing popularity of our products, which contributed to the increase in reports," Raila said. In August 2025, Nick Turley, OpenAI's vice president and head of ChatGPT, noted that the app had quadrupled its weekly active users over the previous year.
OpenAI reports all detected instances of CSAM, whether uploaded by users or requested of its models, across the ChatGPT app (which supports file uploads and image generation) and API access. The figures exclude the Sora video-generation app, which launched in September 2025, after the reporting period ended.
The trend aligns with NCMEC's observation of a 1,325 percent increase in generative AI-related reports across all platforms from 2023 to 2024. OpenAI faces heightened scrutiny over child safety, including lawsuits alleging harms from its chatbot and a US Senate hearing on AI risks. In response, the company introduced parental controls in September 2025 that let parents link accounts with their teens, set restrictions such as disabling image generation, and receive alerts when a teen shows signs of self-harm. It also reached an agreement with California's Department of Justice in October to mitigate risks to teens, and in November released a Teen Safety Blueprint that emphasizes improved CSAM detection and reporting.
Such increases in reports may stem from better detection rather than more incidents, as platforms refine their moderation criteria. OpenAI's transparency offers a fuller picture by disclosing both report counts and content volumes; the two can diverge because a single report may cover multiple pieces of content, as in 2024, while a given piece may also be flagged in more than one report.