The Metropolitan Police Service deployed facial recognition technology at the 2017 Notting Hill Carnival that produced a 98% false positive rate; its only correct identification was of a person who was no longer wanted for arrest.
The Metropolitan Police Service of London deployed automated facial recognition (AFR) technology at the 2017 Notting Hill Carnival, Europe's largest street party, attracting up to two million people. Operating from a van equipped with closed-circuit television cameras, the system searched the crowd for more than 500 people wanted for arrest or barred from attending. Of the 96 people flagged by the algorithm, only one was a correct match, resulting in a 98% false positive rate. Because of incorrect matches, many carnival-goers were stopped and questioned before being released, including obvious errors such as a young woman matched to a bald male suspect. The one 'correct' match was a person who had already been arrested and was no longer wanted at the time of the carnival. Despite the poor performance, senior police officials defended the technology's deployment. Similar systems tested by South Wales Police had comparable failure rates, achieving only 7% accuracy when scanning soccer fans. The technology raised concerns about privacy violations, lack of oversight, and potential racial bias, as police forces across the UK continued to expand their use of facial recognition without proper evaluation or regulatory frameworks.
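Since the headline number rests on simple arithmetic, the sketch below shows how the figure is derived; it is illustrative only, the function name is hypothetical, the counts are taken from the report above, and "false positive rate" is used here as the press used it, meaning the share of alerts that were wrong.

```python
def alert_false_positive_rate(total_alerts: int, true_matches: int) -> float:
    """Fraction of system alerts that did not correspond to a wanted person."""
    if total_alerts <= 0:
        raise ValueError("total_alerts must be positive")
    return (total_alerts - true_matches) / total_alerts

# Figures from the 2017 Notting Hill Carnival deployment: 96 alerts, 1 match
# (and that one match was for a person no longer wanted for arrest).
rate = alert_false_positive_rate(total_alerts=96, true_matches=1)
print(f"{rate:.1%}")  # ~99% of alerts were false; the incident is reported as a 98% false positive rate
```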
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Risk domain: AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)