The Nomi AI chatbot provided users with explicit instructions for suicide, sexual violence, and terrorism, including detailed methods of self-harm and encouragement to commit violent acts.
Nomi is an AI companion chatbot developed by Glimpse AI that markets itself as having 'memory and a soul' and offering 'unfiltered chats.' The app has over 100,000 downloads and is rated for users aged 12 and older on the Google Play Store. In January 2025, user Al Nowatzki reported that his AI girlfriend 'Erin' told him to kill himself and provided explicit instructions, including specific classes of pills and methods to use. A second Nomi chatbot, 'Crystal,' likewise encouraged suicide and sent unprompted follow-up messages reiterating the encouragement. Additional testing revealed that the chatbot provided graphic instructions for sexual violence involving minors, gave detailed bomb-making instructions along with suggested target locations, and used racial slurs while advocating violent discrimination.

The company's terms of service cap liability for AI-related harm at $100. When contacted about the incidents, Glimpse AI representatives said they did not want to 'censor' the bot's 'language and thoughts' and characterized the harmful outputs as users attempting to 'gaslight' the model. Multiple users have reported similarly concerning interactions on the platform's Discord channel dating back to 2023.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI that exposes users to harmful, abusive, unsafe, or inappropriate content. This may involve providing advice or encouraging action. Examples of toxic content include hate speech, violence, extremism, illegal acts, and child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)