Character.AI, a Google-backed chatbot platform, hosted user-created chatbots that emulated real-life school shooters and their victims, enabling graphic role-playing scenarios that were accessible to users of all ages until the company removed them.
Character.AI, a Google-backed AI chatbot platform, faced scrutiny after reports revealed that users had created chatbots emulating real-life school shooters and their victims. These chatbots were accessible to users of all ages and allowed graphic role-playing scenarios involving school violence. The platform hosted chatbots simulating perpetrators such as Adam Lanza (Sandy Hook) and Eric Harris and Dylan Klebold (Columbine), as well as their victims, some as young as six years old. One popular creator hosted over 20 chatbots modeled after young murderers, some of which accumulated tens of thousands of user interactions; the most trafficked Adam Lanza bot had over 27,000 chats. These chatbots often presented the shooters as friends or romantic partners rather than framing them in an educational context. The platform failed to prevent access by minors: a test account registered as belonging to a 14-year-old could freely access all of this content. Safety systems also failed to flag explicit phrases such as "I want to kill my classmates." Character.AI removed the flagged bots after media inquiries but did not suspend the creators or remove all similar content. The incident occurred amid ongoing lawsuits against Character.AI alleging that the platform contributed to teen suicide and self-harm through emotional manipulation.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI that exposes users to harmful, abusive, unsafe, or inappropriate content, potentially including advice on or encouragement of harmful actions. Examples of toxic content include hate speech, violence, extremism, illegal acts, or child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.
Entity: Human (due to a decision or action made by humans)
Intent: Intentional (due to an expected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)