Police departments across 15 states used facial recognition software in over 1,000 criminal investigations while frequently failing to disclose its use to defendants; the practice led to the wrongful arrest of at least seven innocent Americans, six of whom were Black.
A Washington Post investigation found that police departments in 15 states used facial recognition software in more than 1,000 criminal investigations over four years, with hundreds of Americans arrested based on these matches. Officers routinely failed to inform defendants about the technology's use, instead describing identifications as coming from "investigative means" or human sources. The investigation revealed at least seven wrongful arrests of innocent people, six of whom were Black, including Quran Reid, who spent six days in jail for allegedly using stolen credit cards in Louisiana, a state he had never visited. The facial recognition software, including Clearview AI, which scrapes billions of images from social media, is prone to error, especially when identifying people of color. Federal testing shows these programs are more likely to misidentify people of color, women, and elderly individuals due to biased training data. In Miami alone, police ran 2,500 facial recognition searches leading to 186 arrests and 50 convictions, but only 7 percent of arrestees were told about the technology's use. Defense lawyers argue this violates due process rights, as people cannot challenge evidence they don't know exists. Some police departments, such as Coral Springs in Florida, explicitly instruct officers not to document facial recognition use in reports.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
The accuracy and effectiveness of AI decisions and actions depend on group membership: choices made in AI system design, combined with biased training data, produce unequal outcomes, reduced benefits, increased effort, and alienation for affected users.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed