Facebook's ad delivery algorithms systematically showed job advertisements to different demographic groups based on gender and race, even when advertisers set neutral targeting parameters, resulting in discriminatory ad delivery that may violate US employment law.
Researchers at the University of Southern California audited Facebook's ad delivery system by purchasing pairs of job advertisements with identical qualifications but for companies with different real-world gender demographics. Despite Facebook disabling demographic targeting for employment ads in March 2019 after settling lawsuits, the algorithms continued to deliver ads to statistically distinct demographic groups.

For example, Domino's pizza delivery ads were shown more to men while Instacart grocery delivery ads were shown more to women, despite the two jobs having nearly identical qualifications. The same pattern occurred with software engineer positions at Nvidia (skewed male) versus Netflix (skewed female), and with sales associate roles for cars (skewed male) versus jewelry (skewed female). The researchers attributed this discrimination to market optimization effects and to the platform's own predictions about ad 'relevance' to different user groups.

This study followed previous research dating back to 2016, when ProPublica first revealed that Facebook allowed advertisers to exclude audiences by race and gender, and subsequent 2019 studies showing continued algorithmic discrimination in housing ads. The researchers noted no improvement in Facebook's ad delivery algorithms between their 2019 audit and this study, despite ongoing litigation and promises from Facebook to address these issues.
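As a rough illustration of the statistical check underlying this kind of audit, the sketch below applies a two-proportion z-test to the gender breakdown of two paired ads' delivered audiences. The function name and the delivery counts are hypothetical, not from the study, and the researchers' actual methodology (which must also account for Facebook's coarse, estimated audience statistics) is more involved.

```python
import math

def two_proportion_z(male_a, total_a, male_b, total_b):
    """Two-proportion z-test: is the male share of ad A's delivered
    audience significantly different from ad B's?"""
    p_a = male_a / total_a
    p_b = male_b / total_b
    # Pooled proportion under the null hypothesis of equal delivery.
    p_pool = (male_a + male_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical delivery counts: ad A reaches 700 men of 1000 users,
# ad B reaches 400 men of 1000 users.
z = two_proportion_z(700, 1000, 400, 1000)
print(round(z, 2))  # well beyond any conventional significance threshold
```

A large |z| for ads with identical targeting and qualifications is the signature the audit looks for: the skew cannot be explained by the advertiser's settings, so it must come from the delivery algorithm itself.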
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.