Buenos Aires deployed a live facial recognition system linked to a national database containing children as young as 4 years old, leading to false arrests and privacy violations.
The Buenos Aires city government deployed a live facial recognition system on April 24, 2019, without public consultation. The system was designed to link to CONARC (National Register of Fugitives and Arrests), a plain-text spreadsheet database that was accessible via Google Search. Human Rights Watch found that between May 2017 and May 2020, at least 166 children were listed in various versions of CONARC, including a 4-year-old identified as M.G. The system matches suspect photos from the national registry against real-time footage from subway cameras and alerts police to make arrests. It has led to numerous false arrests, including that of a man who was detained for six days, and nearly transferred to a maximum-security prison, before his identity was cleared. The facial recognition algorithm was tested only on adult city government employees before procurement and performs six times worse on children aged 10-16 than on adults aged 24-40. Children's entries in the database contained errors, including typos, conflicting details, and multiple national IDs for the same individual. The system violates international human rights law, which requires privacy protection for children accused of crimes.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, facilitate identity theft, or cause loss of confidential intellectual property.
AI system
Due to a decision or action made by an AI system
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed