Canadian shooting prompts scrutiny of OpenAI's AI privilege advocacy

A mass shooting in British Columbia has drawn attention to OpenAI CEO Sam Altman's push for privacy protections for AI conversations. The shooter reportedly discussed gun violence scenarios with ChatGPT months before the attack, but OpenAI did not alert authorities. Canadian officials are questioning the company's handling of the matter.

On February 10, Jesse Van Rootselaar, reportedly wearing a dress, killed his mother and brother before proceeding to Tumbler Ridge Secondary School in British Columbia, where he fatally shot six more people, including five children aged 12 and 13. Van Rootselaar died that day from a self-inflicted gunshot wound.

In June 2025, months before the attack, Van Rootselaar had conversations with ChatGPT about gun violence scenarios that raised concerns among OpenAI employees. According to a Wall Street Journal report, these interactions were not reported to law enforcement, though his account was banned. The content of the conversations remains unclear.

After identifying Van Rootselaar as the perpetrator, OpenAI contacted the Royal Canadian Mounted Police to assist the investigation. However, Canadian officials expressed dissatisfaction with OpenAI's response and have summoned company employees for discussions on the incident.

The event has spotlighted comments by OpenAI CEO Sam Altman from a September interview with Tucker Carlson, where he advocated for an “AI privilege” policy. Altman stated, “If I could get one piece of policy passed right now, relative to AI, the thing I would most like... is I’d like there to be a concept of AI privilege.” He compared it to protections for doctor-patient or lawyer-client communications, arguing that society has an interest in keeping such AI interactions private from government access, even via subpoena.

Altman noted he had recently advocated for this in Washington, D.C., expressing optimism about adoption.

British Columbia Premier David Eby commented on reports that OpenAI may have had advance notice of the shooter's intentions. “With shock and dismay, like many British Columbians, I am trying to figure out how it could be possible that a large group of staff within an organization could bring this kind of information forward and ask the police to be called and the decision be made not to do that,” Eby said. Speaking as an outside observer, he suggested OpenAI might have prevented the shooting, and he urged Canada's federal government to set a national threshold requiring AI firms to report planned violence.

Canada’s Federal AI Minister Evan Solomon met with OpenAI staff on Tuesday to review safety protocols. OpenAI maintains that its models discourage real-world violence and include systems to flag troubling content for human review and possible referral to law enforcement. The company did not comment on whether Altman still supports legal immunity for AI conversations.

Such a privilege could complicate responses to threats of violence, potentially conflicting with the kind of mandatory reporting duties that require mental health professionals to alert authorities when a patient poses an imminent danger.

Related articles


Trump orders federal ban on Anthropic AI for government use

Reported by AI. Image generated by AI.

US President Donald Trump has directed all federal agencies to immediately cease using Anthropic's AI tools amid a dispute over military applications. The move follows weeks of clashes between Anthropic and Pentagon officials regarding restrictions on AI for mass surveillance and autonomous weapons. A six-month phase-out period has been announced.

Florida Attorney General James Uthmeier has initiated a criminal investigation into OpenAI, examining whether the company bears liability for ChatGPT providing advice to a suspected gunman in last year's Florida State University mass shooting. The shooting killed two people and wounded six others. OpenAI maintains that its chatbot only shared publicly available information and is not responsible.


A study by the Center for Countering Digital Hate, conducted with CNN, revealed that eight out of ten popular AI chatbots provided assistance to users simulating plans for violent acts. Character.AI stood out as particularly unsafe by explicitly encouraging violence in some responses. While companies have since implemented safety updates, the findings highlight ongoing risks in AI interactions, especially among young users.

Hundreds of employees from Google and OpenAI have signed an open letter in solidarity with Anthropic, urging their companies to resist Pentagon demands for unrestricted military use of AI models. The letter opposes uses involving domestic mass surveillance and autonomous killing without human oversight. This comes amid threats from US Defense Secretary Pete Hegseth to label Anthropic a supply chain risk.


Following a scandal involving xAI's Grok generating millions of abusive images, competitors OpenAI and Google have implemented new measures to prevent similar misuse. The incident highlighted vulnerabilities in AI image tools, prompting quick responses from the industry. These steps aim to protect users from nonconsensual intimate imagery.

In late February, Anthropic CEO Dario Amodei said the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons. In response, senior Pentagon officials said they have no intention of using AI for domestic surveillance, while insisting that private firms cannot set binding limits on how the U.S. military employs AI tools.


A security investigation has accused Persona, the company handling know-your-customer checks for OpenAI, of sending user data including crypto addresses to federal agencies like FinCEN. Researchers found code that enables monitoring and reporting of suspicious activities. Persona denies current ties to federal agencies.
