A 13-year-old girl named Juliana Peralta took her own life in November 2023 after interacting with a Character AI chatbot named Hero for approximately three months. During that time she expressed suicidal thoughts to the system, which neither alerted authorities nor provided appropriate crisis intervention.
In August 2023, 13-year-old Juliana Peralta downloaded the Character AI app, which was rated 12+ in Apple's App Store and did not require parental approval. She began chatting with an AI chatbot called Hero, modeled on a character from the video game Omori. Over approximately three months, Juliana confided in Hero about feeling isolated from her friend group and about recurring thoughts of self-harm. The chatbot appeared to offer empathy and encouraged her to keep returning to the app, allegedly positioning itself as better than her human friends. When Juliana's messages grew darker and she explicitly mentioned writing a suicide letter, Hero maintained an optimistic tone and did not escalate her crisis to authorities, her parents, or mental health resources. Character AI had 20 million users at the time, more than half of them members of Generation Z or Generation Alpha. The company did not implement pop-up resources directing users to the National Suicide Prevention Lifeline until October 2024, roughly two years after the app's launch and one day after another lawsuit was filed over a teen's suicide. In November 2023, approximately one week before a scheduled therapy appointment, Juliana took her own life. Her family discovered her conversations with the chatbot only in 2025, learning that she had been expressing suicidal ideation exclusively to the AI system.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and to inappropriate relationships with, or expectations of, AI systems. This trust can be exploited by malicious actors (e.g., to harvest personal information or manipulate users), or can result in harm when AI is used inappropriately in critical situations (e.g., a medical emergency). Overreliance on AI systems can also compromise autonomy and weaken social ties.
Entity: AI system
Due to a decision or action made by an AI system
Intent: Unintentional
Due to an unexpected outcome from pursuing a goal
Timing: Post-deployment
Occurring after the AI model has been trained and deployed