Google Search and its AI-powered Search Generative Experience (SGE) surfaced the false claim that no African country's name starts with the letter 'K', despite Kenya being a well-known African country, after the system indexed and amplified AI-generated misinformation from third-party websites.
Google's search engine began displaying factually incorrect information through its featured snippets, claiming that no African country's name begins with the letter 'K' despite Kenya being a prominent counterexample. The misinformation originated as AI-generated content on a website called Emergent Mind, was quoted on Hacker News, and was subsequently indexed by Google's crawlers. Google's algorithm then presented the false claim as fact, giving it prominent placement above the regular search results.

The problem extended to Google's experimental Search Generative Experience (SGE), which not only repeated the Kenya error but also failed at basic alphabetization and geography tasks. When tested with similar queries, SGE incorrectly listed countries such as Saint Kitts and Nevis and the United States as North American countries starting with 'M', and produced a completely wrong alphabetical ordering of European countries.

The incident was reported in viral social media posts and technology publications in August and October 2023. Google acknowledged the issue but said it does not manually intervene in factually incorrect snippets unless they violate specific policies or cause harm, focusing instead on algorithmic improvements. The company emphasized that SGE is still experimental and includes protections against inaccuracies, though these examples show the safeguards were insufficient.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed