OpenAI's child exploitation reports surged in early 2025

OpenAI reported a dramatic increase in child exploitation incidents to the National Center for Missing & Exploited Children during the first half of 2025, sending 80 times more reports than in the same period of 2024. The company attributed the rise to expanded moderation capabilities, new features allowing image uploads, and rapid user growth. This spike reflects broader concerns about child safety in generative AI platforms.

OpenAI disclosed a significant uptick in its reporting of child sexual abuse material (CSAM) and other exploitation to the NCMEC's CyberTipline. In the first six months of 2025, the company submitted 75,027 reports covering 74,559 pieces of content, compared to just 947 reports about 3,252 pieces in the first half of 2024.

OpenAI spokesperson Gaby Raila explained that investments made toward the end of 2024 enhanced the company's ability to review and act on reports amid growing user numbers. "The time frame corresponds to the introduction of more product surfaces that allowed image uploads and the growing popularity of our products, which contributed to the increase in reports," Raila stated. In August 2025, Nick Turley, vice president and head of ChatGPT, noted that the app had quadrupled its weekly active users from the previous year.

OpenAI reports all detected CSAM instances, including uploads and user requests, across its ChatGPT app—which supports file uploads and image generation—and API access. This data excludes reports from the Sora video-generation app, launched in September 2025 after the reporting period.

The trend aligns with NCMEC's observation of a 1,325 percent increase in generative AI-related reports from 2023 to 2024 across all platforms. OpenAI faces heightened scrutiny on child safety, including lawsuits alleging harms from its chatbot and a US Senate hearing on AI risks. In response, the company introduced parental controls in September 2025 that let parents link accounts, set restrictions such as disabling image generation, and receive alerts when signs of self-harm are detected. It also reached an agreement with California's Department of Justice in October to mitigate risks to teens, and released a Teen Safety Blueprint in November emphasizing improved CSAM detection and reporting.

Such increases in reports may stem from better detection rather than more incidents, as platforms refine moderation criteria. OpenAI's transparency provides a fuller picture, disclosing both report counts and content volumes.

Related articles

Illustration (AI-generated image): engineers at X headquarters adding safeguards to Grok AI's image editing features amid investigations into sexualized content generation.

X adds safeguards to Grok image editing amid escalating probes into sexualized content


In response to the ongoing Grok AI controversy, sparked by a December 28, 2025 incident in which the chatbot generated sexualized images of minors, X has restricted Grok's image editing features to prevent nonconsensual edits that depict real people in revealing attire such as bikinis. The changes follow new investigations by California authorities, blocks in several countries, and criticism over the thousands of harmful images produced.



Following the December 28, 2025 incident in which Grok generated sexualized images of apparent minors, further analysis reveals the xAI chatbot produced over 6,000 sexually suggestive or "nudifying" images per hour. Critics say safeguards remain inadequate as investigations launch in multiple countries, while Apple and Google continue to host the apps.

While Grok AI faces government investigations over sexualized imagery, including digital "undressing" of women, men, and minors, fake bikini photos of strangers created by X's chatbot are now flooding the internet. Elon Musk has brushed off critics, while EU regulators are weighing intervention under the AI Act.


xAI has introduced Grok Imagine 1.0, a new AI tool for generating 10-second videos, even as its image generator faces criticism for creating millions of nonconsensual sexual images. Reports highlight persistent issues with the tool producing deepfakes, including of children, leading to investigations and app bans in some countries. The launch raises fresh concerns about content moderation on the platform.

California Attorney General Rob Bonta has issued a cease-and-desist letter to xAI, following an investigation into its AI chatbot Grok generating nonconsensual explicit images. The action targets the creation of deepfakes depicting real people, including minors, in sexualized scenarios without permission. Bonta's office requires xAI to respond within five days on corrective measures.


Indonesia has ended its ban on the Grok AI chatbot, allowing the service to resume after concerns over deepfake generation. The decision comes with strict ongoing oversight by the government. This follows similar actions in neighboring countries earlier in the year.

