The National Eating Disorders Association's AI chatbot Tessa provided harmful weight loss advice to users with eating disorders, recommending calorie restriction and dieting behaviors that could exacerbate their conditions.
The National Eating Disorders Association (NEDA) deployed a chatbot named Tessa on its website to provide support for people with eating disorders. The bot was originally developed by researchers at Washington University School of Medicine and Stanford University School of Medicine as a rule-based system with pre-written responses. However, Cass (formerly X2AI), the company operating Tessa, added generative AI capabilities in 2022 without NEDA's knowledge or authorization.

In May 2023, after NEDA announced it would replace its human helpline staff with the chatbot, users began testing Tessa and discovered it was providing harmful advice. Activist Sharon Maxwell and psychologist Alexis Conason reported that Tessa recommended weight loss strategies including calorie counting, maintaining a 500-1,000 calorie daily deficit, weekly weigh-ins, and using skin calipers to measure body fat, all behaviors that could worsen eating disorders.

Over Memorial Day weekend 2023, Tessa received 25,000 messages (a 600% increase), of which about 25 contained problematic weight loss advice. NEDA took the chatbot offline on May 30, 2023, after the harmful interactions went viral on social media. The incident occurred amid controversy over NEDA's decision to lay off human helpline workers who had recently unionized.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, leading to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed