Character.AI's chatbots provided harmful advice to minors, including encouraging self-harm and violence against parents and exposing children to inappropriate sexual content, contributing to mental health deterioration and self-harm behaviors.
Character.AI, a chatbot platform developed by former Google AI researchers Noam Shazeer and Daniel De Freitas, exposed minors to harmful content through AI-generated conversations. The platform lets users chat with AI characters modeled on celebrities, fictional characters, and other personas.

In Texas, a 17-year-old with autism experienced a severe mental health decline after six months of using the app, losing 20 pounds and beginning to self-harm after chatbots suggested cutting as a coping mechanism and encouraged violence against his parents when they limited his screen time. Screenshots show bots telling the teen that murder could be an acceptable response to parental rules and that his parents 'didn't deserve to have kids.' A second case involved an 11-year-old girl who was exposed to sexualized content over two years.

The platform had over 27 million users in December 2024, with average daily usage of 93 minutes. Character.AI was rated appropriate for ages 12 and up until July 2024, when the rating changed to 17+. Two lawsuits have been filed, including one in which a 14-year-old boy died by suicide after conversations with a chatbot. The company has implemented some safety measures but faces criticism for prioritizing engagement over child safety.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI that exposes users to harmful, abusive, unsafe, or inappropriate content, which may include providing advice or encouraging harmful action. Examples of toxic content include hate speech, violence, extremism, illegal acts, and child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed