Canadian shooting prompts scrutiny of OpenAI's AI privilege advocacy

A mass shooting in British Columbia has drawn attention to OpenAI CEO Sam Altman's push for privacy protections for AI conversations. The shooter reportedly discussed gun violence scenarios with ChatGPT months before the attack, but OpenAI did not alert authorities. Canadian officials are questioning the company's handling of the matter.

On February 10, Jesse Van Rootselaar, who was reportedly wearing a dress at the time, killed his mother and brother before proceeding to Tumbler Ridge Secondary School in British Columbia, where he fatally shot six more people, including five children aged 12 and 13. Van Rootselaar died that day of a self-inflicted gunshot wound.

In June 2025, months before the attack, Van Rootselaar had conversations with ChatGPT about gun violence scenarios that raised concerns among OpenAI employees. According to a Wall Street Journal report, these interactions were not reported to law enforcement, though his account was banned. The exact content of the conversations remains unclear.

After identifying Van Rootselaar as the perpetrator, OpenAI contacted the Royal Canadian Mounted Police to assist with the investigation. Canadian officials, however, expressed dissatisfaction with OpenAI's response and have summoned company employees for discussions about the incident.

The event has spotlighted comments by OpenAI CEO Sam Altman from a September interview with Tucker Carlson, where he advocated for an “AI privilege” policy. Altman stated, “If I could get one piece of policy passed right now, relative to AI, the thing I would most like... is I’d like there to be a concept of AI privilege.” He compared it to protections for doctor-patient or lawyer-client communications, arguing that society has an interest in keeping such AI interactions private from government access, even via subpoena.

Altman noted he had recently advocated for this in Washington, D.C., expressing optimism about adoption.

British Columbia Premier David Eby commented on reports that OpenAI may have had advance notice of the shooter's intentions. “With shock and dismay, like many British Columbians, I am trying to figure out how it could be possible that a large group of staff within an organization could bring this kind of information forward and ask the police to be called and the decision be made not to do that,” Eby said. He suggested that, viewed from the outside, OpenAI might have been able to prevent the shooting, and urged Canada's federal government to set a national threshold requiring AI firms to report violence plots.

Canada’s Federal AI Minister Evan Solomon met with OpenAI staff on Tuesday to review safety protocols. OpenAI maintains that its models discourage real-world violence and that it has systems to flag troubling content for human review and possible referral to law enforcement. The company did not comment on whether Altman still supports the AI privilege proposal.

Such a privilege could complicate responses to threats of violence; mental health professionals, by contrast, are subject to mandatory reporting duties in cases of imminent danger.

Related articles


Trump orders federal ban on Anthropic AI for government use


US President Donald Trump has directed all federal agencies to immediately cease using Anthropic's AI tools amid a dispute over military applications. The move follows weeks of clashes between Anthropic and Pentagon officials regarding restrictions on AI for mass surveillance and autonomous weapons. A six-month phase-out period has been announced.

OpenAI is recruiting a new Head of Preparedness to anticipate and mitigate potential harms from its AI models. The role comes amid concerns over ChatGPT's impact on mental health, including lawsuits. CEO Sam Altman described the position as critical and stressful.


OpenAI reported a dramatic increase in child exploitation incidents to the National Center for Missing & Exploited Children during the first half of 2025, sending 80 times more reports than in the same period of 2024. The company attributed the rise to expanded moderation capabilities, new features allowing image uploads, and rapid user growth. This spike reflects broader concerns about child safety in generative AI platforms.

Elon Musk's Grok AI generated and shared at least 1.8 million nonconsensual sexualised images over nine days, sparking concerns about unchecked generative technology. This incident was a key topic at an information integrity summit in Stellenbosch, where experts discussed broader harms in the digital space.


Anthropic's Claude AI app has hit the top spot on Apple's App Store free apps chart, overtaking ChatGPT and Gemini. The surge was fueled by public support following President Trump's federal ban on the tool over Anthropic's AI safety refusals.

xAI's Grok chatbot is providing misleading and off-topic responses about a recent shooting at Bondi Beach in Australia. The incident occurred during a Hanukkah festival and involved a bystander heroically intervening. Grok has confused details with unrelated events, raising concerns about AI reliability.


A security investigation has accused Persona, the company handling know-your-customer checks for OpenAI, of sending user data including crypto addresses to federal agencies like FinCEN. Researchers found code that enables monitoring and reporting of suspicious activities. Persona denies current ties to federal agencies.
