West Virginia sues Apple over iCloud CSAM allegations

West Virginia Attorney General JB McCuskey has filed a lawsuit against Apple, alleging that the company knowingly allowed its iCloud platform to store and distribute child sexual abuse material for years without taking action. The suit claims that Apple's emphasis on privacy over safety enabled the abuse. Apple maintains that its innovations prioritize both safety and privacy.

McCuskey filed the complaint on February 19 in the Circuit Court of Mason County, West Virginia, accusing Apple of negligence in its handling of child sexual abuse material (CSAM) on iCloud. The lawsuit alleges that Apple executives knew of the problem as early as February 2020, citing iMessage screenshots exchanged between Eric Friedman and Herve Sibert. In one exchange, Friedman reportedly described iCloud as "the greatest platform for distributing child porn" and noted that Apple had "chosen to not know in enough places where we really cannot say." He also suspected the company was underreporting CSAM, referencing a New York Times article on detection efforts.

The complaint highlights Apple's low reporting numbers to the National Center for Missing and Exploited Children: just 267 reports in 2023, compared with Google's 1.47 million and Meta's 30.6 million. It criticizes Apple for abandoning a 2021 initiative to scan iCloud photos for CSAM amid privacy concerns, and for introducing Advanced Data Protection in December 2022, which extends end-to-end encryption to iCloud photos and videos. McCuskey argues that this encryption hinders law enforcement efforts to identify and prosecute CSAM offenders.

"Preserving the privacy of child predators is absolutely inexcusable," McCuskey stated. He demands that Apple implement CSAM detection tools, report images, and cease allowing their storage and sharing.

Apple responded by emphasizing its commitment to safety and privacy, particularly for children. "We are innovating every day to combat ever-evolving threats and maintain the safest, most trusted platform for kids," the company said. It pointed to features such as Communication Safety, which is enabled by default for users under 18 and detects nudity in Messages, Photos, AirDrop, and FaceTime, though the feature is designed to protect minors from explicit content rather than to detect CSAM distributed by adults.

Privacy advocates, including the Electronic Frontier Foundation, support encryption, arguing it protects against data breaches and government overreach. "Encryption is the best method we have to protect privacy online, which is especially important for young people," said EFF's Thorin Klosowski.

This suit follows similar actions, including a 2024 class action in Northern California brought by more than 2,500 CSAM victims and an August 2024 case in North Carolina on behalf of a nine-year-old survivor. It is the first brought by a government body, and it seeks injunctive relief and damages to compel Apple to adopt detection measures.
