Amazon's facial recognition system Rekognition incorrectly matched 28 members of Congress with arrest photos in an ACLU test, with false matches disproportionately affecting people of color.
The ACLU tested Amazon's facial recognition technology, Rekognition, using the same system available to the public. It built a face database from 25,000 publicly available arrest photos and, using Amazon's default match settings, searched it against public photos of every current member of the House and Senate. The test cost $12.33 to run. Rekognition incorrectly matched 28 members of Congress, identifying them as other people who had been arrested for crimes. The false matches disproportionately affected people of color, including six members of the Congressional Black Caucus and civil rights leader Rep. John Lewis: nearly 40 percent of the false matches were of people of color, even though they make up only 20 percent of Congress. The results underscored concerns, previously documented in academic research, about the accuracy of facial recognition technology on darker-skinned faces and on women. At the time, a sheriff's department in Oregon was already using Rekognition to compare people's faces against a mugshot database, without public debate.
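The scale of the disparity can be sketched with a quick calculation. The count of 11 is an assumption inferred from the reported figures: 11 of 28 false matches yields the "nearly 40 percent" cited above.

```python
# Disparity in the ACLU's Rekognition test, from the reported figures.
false_matches_total = 28
false_matches_people_of_color = 11  # assumed: the count behind "nearly 40 percent" of 28

share_of_false_matches = false_matches_people_of_color / false_matches_total
share_of_congress = 0.20  # people of color as a share of Congress

print(f"Share of false matches: {share_of_false_matches:.1%}")           # 39.3%
print(f"Overrepresentation: {share_of_false_matches / share_of_congress:.1f}x")  # 2.0x
```

People of color were thus misidentified at roughly twice the rate their share of Congress would predict, consistent with the accuracy concerns described above.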
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
The accuracy and effectiveness of AI decisions and actions vary with group membership: choices made in AI system design, combined with biased training data, lead to unequal outcomes, reduced benefits, increased effort, and alienation for affected users.
AI system: Due to a decision or action made by an AI system
Unintentional: Due to an unexpected outcome from pursuing a goal
Post-deployment: Occurring after the AI model has been trained and deployed