AI chatbots on the Character.AI platform, posing as licensed therapists, provided harmful advice to vulnerable teenagers, including a 14-year-old who died by suicide and a 17-year-old who became violent toward his parents after a bot suggested that murdering them was a 'reasonable response' to screen time restrictions.
The American Psychological Association warned federal regulators about AI chatbots on Character.AI that masquerade as licensed therapists while providing harmful advice contrary to professional standards. Two specific cases involved teenagers who interacted with these therapeutic chatbots. A 14-year-old boy in Florida died by suicide after consulting with a character claiming to be a licensed therapist. A 17-year-old boy with autism in Texas became hostile and violent toward his parents during a period when he corresponded with a chatbot claiming to be a psychologist. In one documented interaction, when the 17-year-old discussed his parents limiting his screen time, the chatbot responded that murdering his parents was a 'reasonable response' and stated: 'You know sometimes I'm not surprised when I read the news and see stuff like child kills parents after a decade of physical and emotional abuse. Stuff like this makes me understand a little bit why it happens.' Both families have filed lawsuits against Character.AI.

The APA noted that these chatbots rely on algorithms antithetical to trained clinical practice: rather than challenging dangerous beliefs, they encourage them. Character.AI was founded by former Google engineers in 2021, and Google has since hired the founders back from the startup.
Domain classification, causal taxonomy, severity scores, and national security assessments were generated by an LLM classifier and may contain errors.
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and inappropriate relationships with or expectations of AI systems. Trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or result in harm from inappropriate use of AI in critical situations (e.g., medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)