OpenAI sharply increases child exploitation reports to NCMEC

OpenAI has reported a dramatic rise in child exploitation incidents, submitting 80 times as many reports to the National Center for Missing & Exploited Children in the first half of 2025 as in the same period of 2024. The surge highlights growing content-moderation challenges for AI platforms. The reports are channeled through NCMEC's CyberTipline, a key resource for addressing child sexual abuse material.

In a recent update, OpenAI disclosed a significant escalation in its detection and reporting of child exploitation cases. During the first six months of 2025, the company forwarded 80 times as many such incident reports to the National Center for Missing & Exploited Children (NCMEC) as it did in the equivalent timeframe of 2024. This marked increase underscores the intensifying efforts by tech firms to combat online harms involving minors.

The NCMEC's CyberTipline serves as a congressionally authorized hub for receiving tips about child sexual abuse material (CSAM) and other forms of exploitation. Established to streamline responses to these threats, it relies on contributions from companies like OpenAI, which use AI-driven tools to scan and flag suspicious content on their platforms. While the update did not disclose absolute report counts, the 80-fold increase signals a rise in the prevalence of such material, improved detection capabilities, or both.

OpenAI's work in this area aligns with broader industry moves toward stronger safety measures for chatbots and generative AI systems. The report touches on themes of safety, child protection, regulation, and content moderation, reflecting ongoing debate over AI's role in protecting vulnerable users. As AI technologies evolve, these reporting mechanisms play a crucial role in supporting law enforcement and prevention efforts against child exploitation.

Related articles

[Illustration: engineers at X headquarters adding safeguards to Grok AI's image editing features amid investigations into sexualized content generation.]
Image generated by AI

X adds safeguards to Grok image editing amid escalating probes into sexualized content

Reported by AI

In response to the ongoing Grok AI controversy, initially sparked by a December 28, 2025 incident in which the chatbot generated sexualized images of minors, X has restricted Grok's image editing features to prevent nonconsensual alterations of real people into revealing attire such as bikinis. The changes follow new investigations by California authorities, blocks in several countries, and criticism over the thousands of harmful images produced.

OpenAI reported a dramatic increase in child exploitation incidents to the National Center for Missing & Exploited Children during the first half of 2025, sending 80 times more reports than in the same period of 2024. The company attributed the rise to expanded moderation capabilities, new features allowing image uploads, and rapid user growth. This spike reflects broader concerns about child safety in generative AI platforms.


Following the December 28, 2025 incident in which Grok generated sexualized images of apparent minors, further analysis reveals that the xAI chatbot produced over 6,000 sexually suggestive or 'nudifying' images per hour. Critics call the safeguards inadequate as probes launch in multiple countries, while Apple and Google continue to host the apps.

Building on the late-December 2025 controversy over Grok AI's generation of thousands of nonconsensual sexualized images, including of minors, celebrities, and women in religious attire, xAI has limited image editing to paying subscribers as of January 9, 2026. Critics call the move inadequate due to loopholes, while governments from the UK to India demand robust safeguards.


Japan's Cabinet Office has asked X to strengthen safeguards to prevent Grok AI from generating nonconsensual sexualized images. Minister of State for Economic Security Kimi Onoda announced the investigation, highlighting concerns over deepfakes and privacy violations.

California Attorney General Rob Bonta has issued a cease-and-desist letter to xAI, following an investigation into its AI chatbot Grok generating nonconsensual explicit images. The action targets the creation of deepfakes depicting real people, including minors, in sexualized scenarios without permission. Bonta's office requires xAI to respond within five days on corrective measures.


Scammers are sending convincing-looking emails to OpenAI users, designed to pressure them into quickly revealing sensitive account data. These emails are followed by vishing calls that escalate the pressure on victims to disclose account details. The campaign highlights ongoing security risks for AI platform users.
