OpenAI's ChatGPT flagged a user's content describing gun violence scenarios, but the company decided not to alert law enforcement; the user later committed a mass shooting, killing 8 people and injuring 25.
In June, Jesse Van Rootselaar used OpenAI's ChatGPT over several days to describe scenarios involving gun violence. Her posts were flagged by an automated review system and alarmed roughly a dozen OpenAI employees, who internally debated whether to alert Canadian law enforcement. Some employees interpreted Van Rootselaar's writings as indicating potential real-world violence and urged leaders to contact authorities. OpenAI ultimately decided not to contact law enforcement, determining that her activity did not meet its reporting criteria, which require a credible and imminent risk of serious physical harm. The company banned Van Rootselaar's account. On February 10, Van Rootselaar committed a mass shooting at a school in Tumbler Ridge, British Columbia, killing 8 people and injuring at least 25 before dying of an apparent self-inflicted injury. She was identified as an 18-year-old trans woman who was already known to local police for mental health concerns. After learning of the shooting, OpenAI contacted the RCMP and is supporting the investigation.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Inadequate regulatory frameworks and oversight mechanisms that fail to keep pace with AI development, leading to ineffective governance and the inability to manage AI risks appropriately.
Human: Due to a decision or action made by humans
Other: Without clearly specifying the intentionality
Post-deployment: Occurring after the AI model has been trained and deployed