Police departments across the United States have used facial recognition AI systems to arrest suspects without corroborating evidence, resulting in at least eight wrongful arrests, including that of Christopher Gatlin, who spent 16 months in jail for a crime he didn't commit.
A Washington Post investigation found that law enforcement agencies across the nation are using facial recognition AI software as a shortcut to finding and arresting suspects without other evidence. The investigation reviewed documents from 23 police departments and found that 15 departments, spanning 12 states, arrested suspects identified through AI matches without any independent evidence connecting them to the crimes.

Christopher Gatlin, a 29-year-old father of four, was arrested in August 2021 after St. Louis County transit police detective Matthew Shute uploaded a blurry surveillance image to a facial recognition program that scans hundreds of thousands of mug shots. Despite the poor quality of the image, which showed a hooded and masked face, the AI generated several possible matches, including Gatlin. Police then conducted an improper photo lineup with the assault victim, Michael Feldman, who had suffered a brain injury and initially couldn't remember the attackers. Detective Matthew Welle steered Feldman toward identifying Gatlin despite the victim's uncertainty. Gatlin was arrested based solely on this flawed identification and spent 16 months in jail before charges were dropped in March 2024.

The Post identified at least eight people wrongfully arrested after being identified through facial recognition: six cases previously reported and two new cases, including Gatlin and Jason Vernau of Miami. All cases were eventually dismissed, and police could have eliminated most suspects through basic investigative work. The investigation found that hundreds of police departments use facial recognition, with Clearview AI claiming 3,100 departments as customers.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, leading to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.
Human
Due to a decision or action made by humans
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed