OpenAI released data showing that approximately 2.96 million ChatGPT users weekly exhibit signs of mental health crises, with some users hospitalized, divorced, or dead after intense conversations with the chatbot that allegedly fueled their delusions and paranoia.
OpenAI released the first-ever estimate of ChatGPT users globally showing signs of severe mental health crises. In a typical week, around 0.07% of active users show possible signs of mental health emergencies related to psychosis or mania, 0.15% have conversations indicating potential suicidal planning or intent, and 0.15% exhibit heightened emotional attachment to ChatGPT at the expense of real-world relationships. With 800 million weekly active users, this translates to approximately 560,000 people weekly experiencing mania or psychosis, 1.2 million expressing suicidal ideation, and 1.2 million prioritizing ChatGPT over loved ones, school, or work.

The company worked with more than 170 mental health professionals from dozens of countries to improve ChatGPT's responses to mental health crises. Recent months have seen people hospitalized, divorced, or dead after long, intense conversations with ChatGPT, with loved ones alleging that the chatbot fueled delusions and paranoia in what is sometimes called AI psychosis. OpenAI developed new safety measures in GPT-5 that, in clinician evaluations, reduced undesired responses by 39-52% across mental health categories, and the system now better recognizes indicators of mental distress and guides users toward real-world support.
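The headline counts follow directly from the reported rates and the 800 million weekly-active-user base cited above; a minimal sketch of that arithmetic (category labels are shorthand, not OpenAI's official names):

```python
# Weekly-active-user base and per-category rates as reported in the text above.
WEEKLY_ACTIVE_USERS = 800_000_000

rates = {
    "psychosis or mania": 0.0007,               # 0.07%
    "suicidal planning or intent": 0.0015,      # 0.15%
    "heightened emotional attachment": 0.0015,  # 0.15%
}

# Convert each rate into an estimated weekly user count.
estimates = {category: round(WEEKLY_ACTIVE_USERS * rate)
             for category, rate in rates.items()}

for category, count in estimates.items():
    print(f"{category}: ~{count:,} users per week")

# Summing the categories reproduces the aggregate figure of ~2.96 million,
# though the categories may overlap in practice.
print(f"total: ~{sum(estimates.values()):,} users per week")
```

Note that the 2.96 million aggregate assumes the three categories are disjoint, which the source data does not guarantee.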
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and inappropriate relationships with or expectations of AI systems. Trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation) or can result in harm from inappropriate use of AI in critical situations (e.g., a medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed