Google Photos' image recognition software mistakenly labeled photos of black people as 'gorillas' in 2015, leading to public outrage and Google's decision to completely remove gorilla-related tags from the system rather than fix the underlying algorithmic bias.
In June 2015, Google's newly launched Photos app used machine learning to automatically tag and categorize uploaded images. Jacky Alciné, a computer programmer from Brooklyn, discovered that the app had labeled photos of him and his black friend as 'gorillas.' Alciné posted screenshots on Twitter, prompting a swift response from Yonatan Zunger, then Google's chief social architect, who apologized and promised immediate fixes. Google initially attempted to adjust the algorithm but ultimately removed the 'gorilla' label from the app entirely. The incident highlighted bias in AI training data; the misclassification was widely attributed to a shortage of photos of black people in Google's training set. Similar issues occurred on other photo platforms, including Flickr, whose auto-tagging labeled black people as 'apes.' Follow-up investigations in 2018 revealed that Google had maintained its workaround by blocking searches for 'gorilla,' 'chimp,' 'chimpanzee,' and 'monkey' rather than developing a comprehensive fix. The incident became a prominent example of algorithmic bias and of the risks of deploying AI systems without diverse training data and testing.
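The workaround described above amounts to output filtering rather than model retraining: blocked labels are simply suppressed from user-facing results. The sketch below illustrates that general technique with a hypothetical classifier output; the function, label set, and threshold are illustrative assumptions and not Google's actual code.

```python
# Hypothetical sketch of a label-blocklist workaround, mirroring the kind of
# mitigation described above. This is NOT Google's implementation; the
# labels, threshold, and prediction format are illustrative assumptions.

BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

def filter_predictions(predictions, blocked=BLOCKED_LABELS, threshold=0.5):
    """Drop blocked labels from classifier output instead of fixing the model.

    predictions: list of (label, confidence) pairs from an image classifier.
    Returns the labels that are above the confidence threshold and not blocked.
    """
    return [
        (label, score)
        for label, score in predictions
        if score >= threshold and label.lower() not in blocked
    ]

# Example: the blocked label vanishes from the user-facing tags,
# whether or not the underlying prediction was correct.
raw = [("person", 0.92), ("gorilla", 0.81), ("outdoor", 0.66)]
print(filter_predictions(raw))  # [('person', 0.92), ('outdoor', 0.66)]
```

The design tradeoff is the one the incident exposed: filtering hides the symptom cheaply but leaves the model's bias intact and removes legitimate uses of the blocked labels.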
Domain classification, causal taxonomy, severity scores, and national security assessments were generated by an LLM classifier and may contain errors.
Domain classification: Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and representation of those groups.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)