Facebook's ad targeting algorithms discriminated against users in housing advertisements by allowing advertisers to exclude people based on race, religion, gender, and other protected characteristics, violating the Fair Housing Act.
The Department of Housing and Urban Development (HUD) and the Department of Justice charged Facebook (Meta) with violating the Fair Housing Act through its targeted advertising system for housing ads. The allegations centered on three aspects of Facebook's advertising platform: first, Facebook enabled advertisers to target housing ads using race, color, religion, sex, disability, familial status, and national origin to determine user eligibility; second, Facebook's 'Lookalike Audience' (later 'Special Ad Audience') tool used machine-learning algorithms that considered protected characteristics to find similar users; and third, Facebook's ad delivery system used algorithms that relied on protected characteristics to determine which users actually received housing ads.

HUD alleged that Facebook allowed advertisers to exclude users classified as non-American-born, non-Christian, interested in accessibility, or interested in Hispanic culture, among other groups. The system also enabled geographic exclusion by allowing advertisers to 'draw a red line around neighborhoods on a map.'

Facebook had previously removed thousands of targeting options in August 2018 and reached settlements with civil rights groups in March 2019, agreeing to eliminate age, gender, and ZIP code targeting for housing ads. Under the 2022 Department of Justice settlement, Facebook agreed to stop using the Special Ad Audience tool by December 2022, develop a new system to address algorithmic disparities, pay a civil penalty of $115,054, and submit to ongoing court oversight and third-party monitoring.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by an AI system, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes or unfair representation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.