
OpenAI saw the smoke months before the fire, then decided it wasn’t “imminent” enough to call the cops.
Quick Take
- OpenAI flagged a Canadian user in June 2025 for content tied to “furtherance of violent activities,” then banned the account after internal debate.
- Company leaders reportedly declined to alert police because the material did not meet a threshold of “imminent and credible risk of serious physical harm.”
- On Feb. 10, 2026, Jesse Van Rootselaar killed eight people in Tumbler Ridge, B.C., starting with two family members at home and ending at the local school, then died by suicide.
- After the attack, OpenAI proactively contacted the RCMP and provided account details as the investigation widened to devices and online activity.
What OpenAI Flagged, and Why the Decision Still Matters
OpenAI’s systems reportedly flagged Jesse Van Rootselaar’s ChatGPT activity over several days in June 2025, content describing scenarios involving gun violence. Automated tools flagged it, humans reviewed it, and an internal discussion followed that included roughly a dozen employees. The company chose the step it fully controlled: it banned the account. The step it avoided was the one society argues about: notifying law enforcement.
That fork in the road exposes the hardest question in modern tech: when does a private platform become a de facto early-warning system? Conservatives tend to distrust tech companies acting like shadow governments, yet also expect accountability when preventable harm occurs. OpenAI’s reported rationale leaned on a high bar: no “imminent and credible” threat. The trouble is that real-world violence rarely announces itself with courtroom-grade clarity.
Tumbler Ridge: A Small Place Where Distance Doesn’t Reduce Tragedy
Tumbler Ridge sits in northern British Columbia, a remote community where bad news travels faster than help. Investigators said the attack began at home: Van Rootselaar killed his mother and stepbrother, then went to the local school. Six more died there: five students, ages 12 and 13, and a 39-year-old teaching assistant. Van Rootselaar then died from a self-inflicted gunshot.
Canada’s stricter gun laws add a second layer to the story: people assume mass shootings are “less likely” there, so signals can look like noise until they don’t. The RCMP has emphasized a thorough review of devices, social media, and online activity. That methodical approach makes sense, but it also underlines the uncomfortable reality that much of the relevant evidence now lives on private servers long before it reaches a police evidence locker.
The “Imminent Threat” Standard: Reasonable on Paper, Cruel in Practice
OpenAI’s reported internal threshold echoes language Americans recognize from workplace and school safety protocols: credible, imminent, serious harm. The intent is rational. Companies shouldn’t flood police with false alarms or create a culture where edgy writing triggers a knock at the door. A conservative, common-sense view values due process and rejects pre-crime fantasies. Yet the June 2025 content was reportedly not abstract fiction; it was tied to violent scenarios that crossed OpenAI’s own policy lines.
The deeper issue is asymmetry. The downside of over-reporting is mostly institutional: wasted time, privacy disputes, potential liability, and reputational damage. The downside of under-reporting is human: funerals, shattered towns, and years of trauma. Leaders inside companies feel that asymmetry after the fact, when every “not enough to report” decision gets replayed with a body count. That doesn’t mean leaders acted in bad faith; it means the policy may be built for the wrong enemy.
What a Chatbot Can and Cannot Reveal About Intent
ChatGPT logs can show curiosity, fantasy, or planning; they cannot read a soul. A user can ask about weapons in ways that are lawful, fictional, or merely morbid. That ambiguity argues against knee-jerk referrals. Still, the reported flag here involved repeated violent content over days and a conclusion that it furthered violent activity, which is stronger than the “guy searched something weird once” caricature. When a platform reaches that conclusion, a ban alone can feel like locking one door in a house with many exits.
Platforms also face a second limitation: even if they do report, law enforcement may not be able to act without more. That reality fuels hesitation, because nobody wants to spark a process that goes nowhere while creating the appearance of surveillance. The practical middle ground is clearer escalation pathways, better-defined referral triggers, and a documented handoff that protects privacy while acknowledging patterns that a single officer might never see on their own.
After the Shooting: Cooperation Arrived When Prevention Was No Longer Possible
After the Feb. 10, 2026, killings, OpenAI contacted the RCMP proactively and said it would support the investigation. The RCMP confirmed the contact as it continued examining evidence, with the motive still unclear in public reporting. That sequence matters: cooperation after a tragedy is expected; it’s also the lowest-stakes moment for a company. The reputational risk shifts from “why did you report?” to “why didn’t you report sooner?”
Public trust erodes when tech firms appear to manage danger like a customer-service ticket: terminate account, close case. Americans over 40 remember when threats required proximity and effort; now intent can incubate online, privately, and at speed. A conservative perspective doesn’t demand that OpenAI play cop, but it does demand transparency about where the line sits and who decided it. When the line is “imminent,” families will ask who defines imminent, and with what incentives.
The Policy Fight Coming Next Will Hit Every Platform, Not Just OpenAI
This case will pressure lawmakers and platforms to formalize what has been informal: when a private company should alert police, and what “credible” means when the evidence is text prompts and behavioral patterns. Expect debates about mandated reporting, audit trails, and liability shields. Expect pushback, too, because the same tools that might spot a potential killer can also be abused to target political speech. Common sense says both risks are real, so rules must be narrow, reviewable, and hard to weaponize.
The lasting lesson is not that a chatbot “caused” anything. The lesson is that modern institutions now sit on early signals they never asked to hold. If they choose to hold them, they owe the public a standard that withstands daylight. If they choose not to, they should stop implying they can keep communities safe. Half-measures breed the worst outcome: privacy isn’t truly protected, and lives aren’t truly guarded.
Sources:
ChatGPT-maker OpenAI considered alerting Canadian police about school shooting suspect months ago