A facial recognition system deployed by Buenos Aires authorities in 2019 produced at least 140 false positives that led to police stops or arrests of innocent people, including cases where individuals were detained for days due to database errors, while a judicial investigation revealed potential misuse of the system for surveillance.
The Buenos Aires city government deployed a facial recognition system covering roughly 75% of the capital in 2019 as part of its public security surveillance infrastructure. The system was designed to identify wanted criminals by matching faces captured on street cameras against criminal databases. However, it generated at least 140 false positive matches that led to police stops or arrests of innocent people. In one notable case, Guillermo Federico Ibarrola was arrested and spent five days in prison after the system incorrectly identified him as a suspect in a robbery committed 600 kilometers away in a different city. Data privacy activists sued the city after these errors came to light. A subsequent judicial investigation found evidence of irregularities and raised suspicions that the system may have been misused to build a Big Data database or to surveil individuals beyond its stated crime-prevention purpose. The system was deactivated during the COVID-19 pandemic in 2020 and has remained offline under precautionary judicial measures. The city government, which claimed the system had helped catch almost 1,700 wanted criminals, is now in a legal battle to reactivate it.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, making them prone to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed