The COMPAS risk assessment algorithm used in criminal justice systems across the United States was found to exhibit racial bias, incorrectly labeling black defendants as high risk for reoffending at nearly twice the rate of white defendants.
COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a risk assessment algorithm developed by Northpointe Inc. and used across the United States to predict defendants' likelihood of reoffending. The system analyzes 137 variables, including criminal history, education level, employment status, and responses to questions about criminal thinking, to generate risk scores from 1 to 10. By 2016, these assessments were informing judges' sentencing decisions, and a federal sentencing reform bill proposed mandating their use in federal prisons.

ProPublica analyzed over 7,000 COMPAS scores assigned in Broward County, Florida in 2013-2014 and found the algorithm was only 61% accurate in predicting general recidivism and 20% accurate in predicting violent recidivism. The analysis revealed significant racial disparities: black defendants who did not reoffend were nearly twice as likely as white defendants to be incorrectly labeled high risk (45% vs. 23%), while white defendants who did reoffend were more often incorrectly labeled low risk than black defendants (48% vs. 28%). Because the algorithm is proprietary, defendants and their attorneys could not understand or challenge the basis for their scores, raising due process concerns and the risk of perpetuating existing biases in the criminal justice system.
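The disparities described above are differences in group-wise false positive and false negative rates. As a minimal illustrative sketch (using hypothetical toy data, not ProPublica's Broward County dataset or methodology), these rates can be computed from predicted labels and observed outcomes as:

```python
# Illustrative sketch only: group-wise error rates of a binary risk label.
# Data below are hypothetical examples, not real COMPAS records.

def error_rates(records):
    """Given (predicted_high_risk, actually_reoffended) boolean pairs,
    return (false_positive_rate, false_negative_rate)."""
    fp = sum(1 for pred, actual in records if pred and not actual)
    negatives = sum(1 for _, actual in records if not actual)
    fn = sum(1 for pred, actual in records if not pred and actual)
    positives = sum(1 for _, actual in records if actual)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

# Hypothetical toy data: (labeled high risk, reoffended within two years)
group_a = [(True, False), (True, False), (False, False), (True, True)]
group_b = [(False, False), (True, False), (False, False), (False, True)]

fpr_a, fnr_a = error_rates(group_a)
fpr_b, fnr_b = error_rates(group_b)
# A gap in false positive rates means non-reoffenders in one group are
# mislabeled high risk more often than in the other -- the kind of
# disparity ProPublica reported (45% vs. 23%).
```

A system can be similarly "accurate" overall for each group while the errors it makes fall on different sides: more false positives for one group, more false negatives for the other, which is exactly the asymmetry at issue here.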
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes or unfair representation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed