Facial recognition technology for emotion analysis was found to systematically assign more negative emotions to black faces than white faces, with two services showing consistent racial bias in their interpretations.
A research study analyzed facial recognition technology that interprets emotions from facial expressions, technology increasingly used in hiring decisions and crowd threat assessment. The researcher used a dataset of 400 NBA player photos from the 2016-2017 season to test two emotion recognition services: Face++ and Microsoft's Face API. Both systems consistently assigned more negative emotional scores to black players than to white players, even when controlling for smile intensity. On average, Face++ rated black faces as twice as angry as white faces, while Microsoft's Face API scored black faces as three times more contemptuous than white faces. For example, when comparing similar smiling expressions, Face++ rated Gordon Hayward (white) as 59.7% happy and 0.13% angry, while rating Darren Collison (black) as 39.2% happy and 27% angry, despite both having similar smile scores. The study identified two types of bias: a consistent tendency to score black faces as angrier regardless of expression, and a tendency to assign more negative emotions when facial expressions were ambiguous. The research suggests these systems reflect existing human biases and could formalize racial stereotypes into algorithmic decision-making processes.
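The study's core analysis, comparing emotion scores between groups while controlling for smile intensity, can be sketched as a simple regression. The code below is a minimal illustration using synthetic data, not the study's dataset or the actual API outputs; the function name, coefficients, and noise level are assumptions made for the example:

```python
import numpy as np

def race_gap_in_anger(anger, smile, is_black):
    """Estimate the group gap in anger scores after controlling for
    smile intensity, via ordinary least squares:
        anger ~ intercept + smile + is_black
    Returns the coefficient on the group indicator."""
    X = np.column_stack([np.ones(len(anger)), smile, is_black])
    coef, *_ = np.linalg.lstsq(X, np.asarray(anger, float), rcond=None)
    return coef[2]

# Synthetic illustration (NOT the study's data): anger scores that
# fall with smile intensity, plus a fixed offset for one group.
rng = np.random.default_rng(0)
smile = rng.uniform(0, 1, 400)        # 400 photos, as in the study
is_black = rng.integers(0, 2, 400)
anger = 0.5 - 0.4 * smile + 0.15 * is_black + rng.normal(0, 0.02, 400)

gap = race_gap_in_anger(anger, smile, is_black)
print(round(gap, 2))  # recovers the built-in group offset, ~0.15
```

A nonzero coefficient on the group indicator, after smile intensity is held fixed, is the kind of evidence the study reports: the score difference is not explained by how much the subjects are smiling.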
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and misrepresentation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed