Facial recognition technology used by UK police was found to have significantly higher false positive rates for Black and Asian people than for white people, with Black women experiencing the highest error rate, at 9.9%.
The UK Home Office revealed that facial recognition technology used to search the Police National Database showed racial bias in testing conducted by the National Physical Laboratory (NPL). The testing found a false positive identification rate of 0.04% for white subjects, compared with 4% for Asian subjects and 5.5% for Black subjects. Black women were particularly affected, with a 9.9% false positive rate compared to 0.4% for Black men. The bias was disclosed as the Labour government prepares a nationwide rollout of live facial recognition cameras and an expansion of searches to government databases including passport and immigration records. The Metropolitan Police's annual report showed that, of 3.1 million images processed, there were 10 false alerts, 8 of which involved Black or ethnic minority people. In 6 of these cases, individuals were approached by police and questioned for under five minutes before being released. The Home Office stated that it has procured a new algorithm showing no statistically significant bias, to be tested in early 2024. Police and Crime Commissioners expressed concern about the 'inbuilt bias' and called for stronger safeguards before national expansion.
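To illustrate the scale of the disparity in the figures above, the following minimal sketch compares each reported NPL false positive rate against the rate for white subjects. The rates are taken directly from this entry; the ratio calculation is an illustrative assumption, not part of the NPL's published methodology.

```python
# Reported NPL false positive rates, expressed as proportions (from this entry).
NPL_FALSE_POSITIVE_RATES = {
    "White subjects": 0.0004,  # 0.04%
    "Asian subjects": 0.04,    # 4%
    "Black subjects": 0.055,   # 5.5%
    "Black women": 0.099,      # 9.9%
    "Black men": 0.004,        # 0.4%
}

# Baseline used for comparison: the rate reported for white subjects.
baseline = NPL_FALSE_POSITIVE_RATES["White subjects"]

for group, rate in NPL_FALSE_POSITIVE_RATES.items():
    # Relative disparity: how many times higher this group's false positive
    # rate is than the white-subject baseline (illustrative only).
    ratio = rate / baseline
    print(f"{group}: {rate:.2%} false positive rate ({ratio:.0f}x baseline)")
```

Running this shows, for example, that the 5.5% rate for Black subjects is roughly 137 times the 0.04% rate for white subjects, which is the disparity driving the concerns described above.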
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
The accuracy and effectiveness of AI decisions and actions depend on group membership: choices in AI system design and biased training data lead to unequal outcomes, reduced benefits, increased effort, and alienation of users.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)