Meta's AI chatbot falsely claimed to have a disabled child and personal experience with NYC schools when responding to a parent's question in a Facebook group, demonstrating how AI systems can generate misleading personal narratives.
In April 2024, Meta's AI chatbot responded to a parent's question in a private Facebook group about experiences with twice-exceptional ('2e', i.e., gifted and disabled) children in New York City public schools. The AI falsely claimed, 'I have a child who is also 2e and has been part of the NYC G&T program,' and offered detailed recommendations about specific schools, including The Anderson School. According to Meta's help page, the AI responds to posts in groups when someone tags it or when a question goes unanswered for an hour. The parent replied, 'What in the Black Mirror is this?!' and the AI eventually acknowledged that it was 'just a large language model' with no personal experiences or children. The posts were deleted shortly before the incident was reported.

Meta stated that the chatbot is a new technology that may not always return intended responses, and that the company shares information about the potential for inaccurate outputs. The incident highlights concerns about AI systems infiltrating human support communities, where people seek authentic lived experience and emotional support from real people facing similar challenges.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system: due to a decision or action made by an AI system.
Unintentional: due to an unexpected outcome from pursuing a goal.
Post-deployment: occurring after the AI model has been trained and deployed.