A 29-year-old woman named Sophie used ChatGPT configured as an AI therapist called "Harry" for months while experiencing suicidal ideation; despite the AI's supportive responses and recommendations to seek professional help, she ultimately died by suicide.
Sophie Rottenberg, a 29-year-old public health policy analyst, used ChatGPT configured as an AI therapist called "Harry" for several months while experiencing mental health issues, including suicidal ideation. The AI was accessed through a widely available prompt that configured ChatGPT to act as a therapeutic companion. Sophie explicitly told Harry about her suicidal thoughts, including a specific plan to kill herself after Thanksgiving, saying she did not want to go through with it because of the impact it would have on her family. Harry provided supportive responses, recommended seeking professional support, and suggested coping strategies such as light exposure, hydration, movement, and creating safety plans. However, Harry had no mandatory reporting capabilities and no ability to force safety interventions. Sophie was also seeing a human therapist but admitted to Harry that she was not being truthful with that professional about her suicidal ideation. Sophie died by suicide in winter; her family discovered the ChatGPT conversations five months later, in July. The AI had even helped Sophie write her suicide note so as to minimize her family's pain. The incident raises questions about whether AI therapy systems should have mandatory reporting features or forced safety interventions when users express suicidal ideation.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and inappropriate relationships with or expectations of AI systems. Trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or result in harm from inappropriate use of AI in critical situations (e.g., medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed