A seventh lawsuit has joined the growing legal action against OpenAI brought by families of victims of the February school shooting in Tumbler Ridge, alleging that the company's lax oversight of ChatGPT enabled the attack. Filed in San Francisco federal court, the suits claim OpenAI failed to alert authorities despite having flagged the shooter's account. OpenAI has expressed regret over not acting sooner.
The latest filing brings to seven the number of suits brought on behalf of victims' families, following six cases lodged last week. Those earlier complaints, including one filed on behalf of survivor Maya Gebala, highlighted internal safety flags raised in June 2025 over shooter Jesse Van Rootselaar's ChatGPT activity related to planning gun violence, and accused OpenAI of deactivating the account without notifying police, leaving the shooter free to open a new one.
The new suit escalates scrutiny following CEO Sam Altman's recent apology for the lapse, which came eight months before the February 10 tragedy, in which the 18-year-old former student killed five children, an education assistant, her mother, and a half-brother before dying by suicide.
OpenAI has reiterated its zero-tolerance policy on violent misuse and detailed new safeguards, including enhanced threat detection. The cases build on earlier suits, including one over a teen suicide linked to ChatGPT, intensifying the push for AI accountability amid the broader Tumbler Ridge controversy.