From 2012 to 2020, Rite Aid deployed facial recognition technology in hundreds of stores, generating thousands of false-positive matches that disproportionately affected Black, Latino, and Asian customers as well as women, who were wrongly accused of shoplifting and subjected to harassment, searches, and public humiliation.
From October 2012 to July 2020, Rite Aid deployed AI-based facial recognition technology in hundreds of its retail pharmacy locations to identify customers deemed likely to engage in shoplifting or other criminal behavior. The company contracted with two unnamed vendors to build a database of "persons of interest" containing tens of thousands of images, often low-quality photos taken from security cameras or employee phones. When a customer entered a store, the system would alert employees via phone if it matched that person to the database, instructing them to take actions such as surveilling the customer, conducting searches, or removing them from the store.

The Federal Trade Commission found that the system generated thousands of false-positive matches and disproportionately affected people of color and women. Rite Aid failed to test the system's accuracy, enforce image quality controls, adequately train employees, or monitor false positives. The technology was deployed primarily in urban areas with large Black, Latino, and Asian communities. Customers were wrongly accused of crimes in front of family and friends, subjected to searches, banned from stores, and had police called on them; in one case, an 11-year-old girl was stopped and searched because of a false match. The FTC reached a settlement requiring Rite Aid to cease using facial recognition technology for five years and to delete all collected images.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed