The UK Home Office's automated passport photo checker exhibited racial bias, incorrectly rejecting photos of dark-skinned people at significantly higher rates than those of light-skinned people.
The UK Home Office deployed an automated photo quality checker on its passport application website to flag photos that do not meet official requirements, such as a neutral expression, a closed mouth, and looking straight at the camera. A BBC investigation using over 1,000 photographs of politicians found systematic bias in the system's performance. Dark-skinned women were told their photos were poor quality 22% of the time, compared with 14% for light-skinned women, while dark-skinned men faced rejection 15% of the time versus 9% for light-skinned men. Photos of women with the darkest skin were four times more likely to be graded poor quality than photos of women with the lightest skin. The system was supplied by an unnamed external provider. Freedom of information documents from 2019 revealed that the Home Office was aware of this problem but decided the 'overall performance' was good enough to launch the checker. Over nine million people have used the service. Individual users such as Elaine Owusu and Cat Hallam had their photos rejected multiple times with incorrect reasons, such as 'mouth looks open' or 'reflections on your face', despite their photos meeting the actual requirements.
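To make the reported disparities concrete, the following is a minimal sketch of how such rejection-rate gaps can be expressed as ratios, using the aggregate figures published by the BBC. The `rejection_rates` mapping and the `disparity` helper are illustrative names introduced here, not part of the Home Office system or the BBC's methodology.

```python
# Sketch: quantifying the rejection-rate disparities reported by the BBC
# investigation. The rates below are the published aggregate figures; the
# group labels and the helper function are illustrative assumptions.

rejection_rates = {
    ("dark-skinned", "women"): 0.22,
    ("light-skinned", "women"): 0.14,
    ("dark-skinned", "men"): 0.15,
    ("light-skinned", "men"): 0.09,
}

def disparity(rates, disadvantaged, advantaged):
    """Ratio of rejection rates between two groups; 1.0 means parity."""
    return rates[disadvantaged] / rates[advantaged]

for gender in ("women", "men"):
    ratio = disparity(
        rejection_rates,
        ("dark-skinned", gender),
        ("light-skinned", gender),
    )
    print(f"{gender}: dark-skinned photos rejected "
          f"{ratio:.2f}x as often as light-skinned photos")

# Output: women 1.57x, men 1.67x. The BBC's separate comparison of the
# darkest- and lightest-skinned women found a 4x gap in 'poor quality'
# gradings, a sharper disparity than the aggregate ratios above.
```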
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
The accuracy and effectiveness of AI decisions and actions depend on group membership: design decisions in the AI system and biased training data lead to unequal outcomes, reduced benefits, increased effort, and alienation for affected users.
Entity: AI system (due to a decision or action made by an AI system)
Intentionality: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)