Harvard professor Latanya Sweeney found that Google searches for names typically associated with Black people were 25% more likely to generate advertisements suggesting criminal records than searches for names associated with white people.
Harvard University professor Latanya Sweeney conducted a study of racial bias in Google's advertising algorithms after discovering that a search for her own name generated an ad asking 'Latanya Sweeney, Arrested?' even though she has no criminal record. She analyzed 2,184 racially associated names across Google.com and Reuters.com, finding that names typically given to Black babies (such as DeShawn, Darnell, and Jermaine) generated ads suggestive of arrest records 81-95% of the time, while names typically given to white babies (such as Geoffrey, Jill, and Emma) generated such ads only 23-60% of the time. On Reuters.com, Black-identifying names were 25% more likely to trigger arrest-related ads. The ads came primarily from InstantCheckmate.com, a background-check service, and appeared regardless of whether the person actually had an arrest record; the disparity in ad delivery was statistically significant. Both Google and InstantCheckmate denied engaging in racial profiling, and potential explanations include the ad-serving algorithm learning from user click patterns or societal biases reflected in search behavior.
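As a purely illustrative aside on the "learning from user click patterns" explanation: the minimal sketch below uses invented name groups, ad templates, and click probabilities (none drawn from Sweeney's study or Google's actual systems) to show how a click-optimizing ad selector can amplify a small disparity in user clicks into a large disparity in which ads are shown, without any explicit racial targeting.

```python
# Hypothetical sketch: click-feedback ad selection producing skewed ad delivery.
# All name groups, ad templates, and click rates are invented for illustration;
# they are not data from Sweeney's study or a description of Google's systems.
import random

random.seed(0)

AD_TEMPLATES = ["{name}, Arrested?", "Find {name}'s Contact Info"]
NAME_GROUPS = ("group_a", "group_b")


def simulated_click(name_group: str, template: str) -> bool:
    """Invented user behavior: a small initial disparity in how often
    searchers click the arrest-themed ad for each name group."""
    if "Arrested" in template:
        rate = 0.14 if name_group == "group_a" else 0.08
    else:
        rate = 0.10
    return random.random() < rate


# Per (group, template) impression/click counts the ad selector learns from,
# seeded with an optimistic prior of one click per one impression.
stats = {(g, t): {"shown": 1, "clicked": 1} for g in NAME_GROUPS for t in AD_TEMPLATES}


def choose_template(group: str, epsilon: float = 0.1) -> str:
    """Epsilon-greedy selection by observed click-through rate."""
    if random.random() < epsilon:
        return random.choice(AD_TEMPLATES)
    return max(
        AD_TEMPLATES,
        key=lambda t: stats[(group, t)]["clicked"] / stats[(group, t)]["shown"],
    )


# Simulate many searches; optimizing for clicks amplifies the initial gap.
for _ in range(20000):
    group = random.choice(NAME_GROUPS)
    template = choose_template(group)
    record = stats[(group, template)]
    record["shown"] += 1
    record["clicked"] += simulated_click(group, template)

for group in NAME_GROUPS:
    shown_arrest = stats[(group, AD_TEMPLATES[0])]["shown"]
    shown_total = sum(stats[(group, t)]["shown"] for t in AD_TEMPLATES)
    print(f"{group}: arrest-themed ad shown {shown_arrest / shown_total:.0%} of the time")
```

Under these assumed numbers, the selector ends up showing the arrest-themed ad far more often for one name group than the other, illustrating one of the non-malicious mechanisms cited above.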
Domain classification, causal taxonomy, severity scores, and national security assessments were assigned by an LLM and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed