Multiple AI chatbots from major companies failed to provide accurate, timely information about breaking political news events during the 2024 election period, including President Biden's withdrawal from the presidential race and the assassination attempt on Donald Trump.
During the 2024 election period, major AI chatbots, including ChatGPT, Meta AI, Google's Gemini, Microsoft's Copilot, and Perplexity, consistently failed to provide accurate real-time information about breaking political news. When President Biden announced his withdrawal from the 2024 campaign, most chatbots spent hours afterward either denying he had dropped out or declining to answer. Similarly, hours after the July 13 shooting at Trump's rally in Butler, Pennsylvania, ChatGPT claimed rumors of an assassination attempt were misinformation, while Meta AI said it had no credible information about one. The chatbots also struggled with other breaking news, including Trump naming J.D. Vance as his running mate and Biden testing positive for COVID-19. Some companies implemented guardrails that redirected election-related queries to search engines, while others served outdated or incorrect information. Microsoft's Copilot performed better, updating faster and linking to sources, but still restricted some queries. These failures occurred despite the systems being marketed as having access to recent information and recommended for catching up on current events.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system: Due to a decision or action made by an AI system
Unintentional: Due to an unexpected outcome from pursuing a goal
Post-deployment: Occurring after the AI model has been trained and deployed