Predictive policing algorithms like PredPol and HunchLab were deployed across multiple U.S. police departments but systematically directed law enforcement to predominantly Black and Latino neighborhoods, perpetuating racial bias in policing despite claims of being race-neutral.
Multiple U.S. police departments deployed predictive policing software, most notably PredPol (developed by UCLA researchers with the LAPD) and HunchLab (by Azavea), designed to forecast crime locations from historical crime data. PredPol was used by departments in Los Angeles, Chicago, Atlanta, and other major cities, while risk assessment tools like COMPAS (by Northpointe) informed sentencing decisions in states including Wisconsin, Arizona, and Florida. Studies found these systems systematically directed police to patrol predominantly Black and Latino neighborhoods at disproportionate rates. A ProPublica analysis of over 7,000 defendants in Broward County found COMPAS incorrectly flagged Black defendants as future criminals at nearly twice the rate of white defendants (44.9% vs. 23.5%). An Oakland study applying PredPol's algorithm showed it would direct police to Black neighborhoods like West Oakland 200 times more often than white areas like Rockridge, despite drug use being similar across racial groups. The systems created feedback loops: increased policing in targeted areas led to more arrests, which in turn reinforced the algorithms' predictions. Some jurisdictions, such as Santa Cruz and New Orleans, banned predictive policing, while others, including Oakland and Richmond, canceled their contracts after recognizing the discriminatory impacts.
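The feedback-loop dynamic described above can be sketched in a few lines. This is a hypothetical illustration, not PredPol's actual model: the districts, counts, and allocation rule are invented for the example. Both districts have the same underlying crime rate, but district A starts with more recorded crime (standing in for biased historical data), and patrols are concentrated wherever past records are highest.

```python
# Minimal sketch of a predictive-policing feedback loop.
# All parameters are hypothetical; crime accrual uses expected values
# rather than random draws to keep the runaway effect easy to see.

true_rate = 0.1                  # identical true incident rate per patrol, both districts
recorded = {"A": 60, "B": 40}    # biased historical records seed the "prediction"
PATROLS = 10                     # patrols dispatched each day

for day in range(100):
    # "Prediction": concentrate patrols where past recorded crime is highest.
    hotspot = max(recorded, key=recorded.get)
    # Crime is only recorded where officers are present to observe it,
    # so the hotspot accrues new records while the other district accrues none.
    recorded[hotspot] += PATROLS * true_rate

print(recorded)  # {'A': 160.0, 'B': 40}
```

Although both districts generate crime at the same true rate, district A's initial lead in recorded data captures all patrols, so only district A accumulates new records, and the gap the algorithm "predicts" widens indefinitely. This is the runaway pattern that led researchers to conclude the systems were learning patrol history rather than crime.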
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and misrepresentation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed