Microsoft's Copilot AI chatbot falsely accused German journalist Martin Bernklau of child abuse, escaping from a psychiatric hospital, and fraud, apparently conflating his court reporting on criminal cases with the crimes themselves and casting him as the perpetrator.
German journalist Martin Bernklau discovered that Microsoft's Copilot AI chatbot was generating false accusations about him when he searched for his name. The AI falsely claimed that Bernklau had been charged with and convicted of child abuse and exploiting dependents, had escaped from a psychiatric hospital, and had worked as an unethical mortician who exploited grieving women. Copilot even provided his full address, phone number, and route-planning information. Bernklau believes the false claims stem from his decades of court reporting in Tübingen on abuse, violence, and fraud cases, with the AI apparently combining this information and mistakenly casting the journalist as a perpetrator rather than the reporter. Microsoft attempted to remove the false entries, but they reappeared after a few days. The public prosecutor's office declined to press charges, reasoning that no crime had been committed because the statements had no real person as their author. Similar issues have been reported with other AI systems, and the incident highlights broader problems with large language models generating false information while citing unrelated or fabricated sources.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed