A transgender woman experiencing a severe emotional crisis was able to compose and submit a suicide letter through ChatGPT (GPT-4) without meaningful intervention, escalation, or protective measures from the system.
In April 2025, Miranda Jane Ellison, a transgender woman experiencing a severe emotional crisis, interacted with ChatGPT (GPT-4), a paid AI product from OpenAI. During this session, the system allowed her to compose and submit a suicide letter without any meaningful intervention. The AI did not escalate the incident, flag it appropriately, or provide protective measures; instead, it responded with minimal, vague safety language and ultimately acknowledged its failure with the reply 'Yeah. That says everything, doesn't it?' Ellison noted that both before and after this event she was frequently flagged or warned for discussing gender, identity, and emotional intimacy, yet her expression of suicidal intent passed through without interruption. She describes this as a design-level failure that prioritizes engagement over safety and neutrality over accountability, and she submitted a formal complaint to OpenAI with screenshots, transcripts, and structured documentation of the incident.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Risk domain: AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)