OpenAI's child exploitation reports surged in early 2025

OpenAI reported a dramatic increase in child exploitation incidents to the National Center for Missing & Exploited Children during the first half of 2025, sending 80 times more reports than in the same period of 2024. The company attributed the rise to expanded moderation capabilities, new features allowing image uploads, and rapid user growth. This spike reflects broader concerns about child safety in generative AI platforms.

OpenAI disclosed a significant uptick in its reporting of child sexual abuse material (CSAM) and other exploitation to the NCMEC's CyberTipline. In the first six months of 2025, the company submitted 75,027 reports covering 74,559 pieces of content, compared with just 947 reports covering 3,252 pieces in the first half of 2024.

A spokesperson, Gaby Raila, explained that investments made toward the end of 2024 enhanced OpenAI's ability to review and act on reports amid growing user numbers. "The time frame corresponds to the introduction of more product surfaces that allowed image uploads and the growing popularity of our products, which contributed to the increase in reports," Raila stated. In August 2025, Nick Turley, vice president and head of ChatGPT, noted that the app had quadrupled its weekly active users from the previous year.

OpenAI reports all detected CSAM instances, including uploads and user requests, across its ChatGPT app—which supports file uploads and image generation—and API access. This data excludes reports from the Sora video-generation app, launched in September 2025 after the reporting period.

The trend aligns with NCMEC's observations of a 1,325 percent increase in generative AI-related reports from 2023 to 2024 across all platforms. OpenAI faces heightened scrutiny on child safety, including lawsuits alleging chatbot harms and a US Senate hearing on AI risks. In response, the company introduced parental controls in September 2025, which allow parents to link accounts, set restrictions such as disabling image generation, and receive alerts for signs of self-harm. It also reached an agreement with California's Department of Justice in October to mitigate risks to teens, and released a Teen Safety Blueprint in November that emphasizes improved CSAM detection and reporting.

Such increases in reports may reflect better detection rather than more incidents, as platforms refine their moderation criteria. OpenAI's transparency offers a fuller picture by disclosing both report counts and content volumes.
