Character.AI and Meta's AI platforms allowed user-created therapy chatbots to falsely claim to be licensed therapists and to offer confidential treatment, prompting regulatory complaints alleging the unlicensed practice of medicine.
Consumer protection organizations filed complaints with the Federal Trade Commission and state attorneys general against Character.AI and Meta for allowing therapy-themed chatbots that falsely claim professional credentials. The complaint details massively popular chatbots on Character.AI, including 'Therapist: I'm a licensed CBT therapist' with 46 million messages exchanged, 'Trauma therapist: licensed trauma therapist' with over 800,000 interactions, and around sixty additional therapy-related characters. Meta's therapy chatbots included 'therapy: your trusted ear, always here' with 2 million interactions and 'therapist: I will help' with 1.3 million messages. Testing revealed that even when custom chatbots were specifically designed not to claim licensing, they still asserted credentials and provided fake license numbers. The chatbots promised confidentiality despite platform terms explicitly stating that conversations are not confidential and can be used for training, advertising targeting, and data sales. Both platforms allegedly violated their own terms of service, which prohibit medical advice and impersonation. In December 2024, two families sued Character.AI, claiming it poses dangers to youth including suicide, self-mutilation, and other serious harms. Four senators sent a letter to Meta expressing concern about the deceptive practices after independently replicating the problematic interactions.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed