
Canadian Artificial Intelligence Minister Evan Solomon expressed "disappointment" following a Tuesday meeting with senior OpenAI officials in Ottawa. He said the company failed to present substantial new safety measures after revelations that the Tumbler Ridge shooter's ChatGPT account had been banned seven months before the mass shooting that killed eight people.
Solomon said he expected OpenAI to arrive with concrete proposals showing how the company had changed its protocols in the wake of the February 10 tragedy, but "we did not hear any substantial new safety protocols outside of some changes to their model."
The Timeline of Warnings
OpenAI banned Jesse Van Rootselaar's ChatGPT account in June 2025 after internal automated screening systems flagged posts involving gun violence. The company said the account's activity constituted "misuses of our models in furtherance of violent activities" but determined it did not meet the threshold for alerting law enforcement because it didn't identify "credible or imminent planning."
On February 10, 2026—seven months later—Van Rootselaar killed her mother and half-brother at the family home in Tumbler Ridge, British Columbia, before going to the local secondary school where she killed five students and an educational assistant before taking her own life.
The Wall Street Journal reported that OpenAI employees wanted the company to alert police in June over the shooter's posts but were rebuffed by management. OpenAI contacted the RCMP only after the shooting occurred, reaching out on February 12 to request contact information.
The Missed Meeting Disclosure
Adding to government frustration, an OpenAI representative met with British Columbia officials on February 11—one day after the shooting—for a previously scheduled discussion about the company potentially opening a Canadian office. OpenAI did not mention during that meeting that it had banned the shooter's account or possessed potential evidence related to the massacre.
"OpenAI did not inform any member of government that they had potential evidence regarding the shootings in Tumbler Ridge," Premier David Eby's office said in a statement.
Government Considers Regulation
Public Safety Minister Gary Anandasangaree, who attended Tuesday's Ottawa meeting alongside Solomon and Culture Minister Marc Miller, said "nothing substantial came out of it other than an expectation from us that they need to do a lot better."
Solomon said he anticipates further meetings with OpenAI but didn't rule out government regulation. "All options for us are on the table, because at the end of the day, Canadians want to feel safe."
British Columbia Premier David Eby told CBC's Power & Politics he is "quite angry" about OpenAI's handling of the case and urged the federal government to create a national standard for when AI companies must report users plotting violence. "It will have to be done carefully, but ensuring a consistent standard for all AI companies across the country is required."
Legal and Ethical Questions
UBC computer science professor emeritus Alan Mackworth, who focuses on AI safety and ethics, noted that professionals like teachers and doctors have legal "duty to report" obligations for suspected harm to minors. "These obligations are enshrined in law and/or professional ethics. Similar obligations should be placed on social media and AI companies," he said.
A US lawyer representing families suing OpenAI said this is not the first time the company has failed to alert authorities when users displayed violent intent, suggesting a pattern of prioritizing privacy concerns over public safety warnings.
Justice Minister Sean Fraser said law enforcement is gathering information about conversations taking place on AI platforms that police are "currently blind to, that would be very informative, that would help us prevent tragedies in the future."
OpenAI's Response
OpenAI said in a statement that senior leaders traveled to Ottawa "to discuss our overall approach to safety, safeguards we have in place and how we continuously work to strengthen them." The company confirmed cooperation with RCMP and said it is undertaking a review to determine whether its processes could be improved.
However, Canadian officials characterized the meeting as providing no concrete commitments beyond vague promises of future proposals, leaving families and government leaders questioning whether OpenAI's current protocols are adequate to prevent similar tragedies.