Google's AI Overview search feature repeatedly recommended that parents use actual human feces on balloons to teach children toilet training, misinterpreting a legitimate training method that uses shaving cream as fake waste.
Google's AI Overview search feature provided dangerous and unsanitary advice to parents searching for toilet training guidance. When users searched for queries like 'how to teach wiping poo during potty training,' the AI repeatedly recommended using actual human feces on balloons as part of a training exercise. The AI misinterpreted a legitimate balloon training method demonstrated in a 2022 YouTube video by Australian pediatric occupational therapists, who used shaving cream as fake waste and referred to it colloquially as 'poo' during their demonstration. Google's AI Overview missed this context, interpreted the references to 'poo' literally, and suggested parents use real human feces instead of the safe shaving cream substitute. The AI cited the same YouTube video as its primary source across multiple related queries. The incident follows earlier problems with Google's AI Overview feature, which had recommended that users put glue on pizza and eat small rocks. Google acknowledged the error and stated that it uses such examples to improve the system, noting that AI Overviews sometimes misinterpret language or miss context.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed