Studies found that COMPAS, a widely used algorithmic risk assessment tool for predicting criminal recidivism, performs no better than untrained volunteers and exhibits racial bias in its predictions.
COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is an algorithmic risk assessment tool developed by Equivant (formerly Northpointe) that has been used to assess more than one million defendants since 1998. The system draws on 137 variables to predict whether a defendant will reoffend within two years, and its scores inform judges' bail and sentencing decisions.

Multiple studies have revealed significant problems with the system. Research by Dartmouth College researchers Julia Dressel and Hany Farid found that COMPAS achieved only 65% accuracy in predicting recidivism, compared with 67% accuracy achieved by untrained volunteers recruited through Amazon Mechanical Turk who were given only seven pieces of information per defendant. A simple linear classifier using just two factors, age and number of prior convictions, matched COMPAS's performance at 66% accuracy.

ProPublica's 2016 analysis of more than 7,000 defendants in Broward County, Florida found that while COMPAS predicted recidivism for black and white defendants with similar overall accuracy, its errors were systematically skewed by race. Black defendants who did not reoffend were nearly twice as likely to be incorrectly flagged as high risk (a 45% vs. 23% false positive rate), while white defendants who did reoffend were nearly twice as likely to be incorrectly labeled as low risk (a 48% vs. 28% false negative rate). Because COMPAS is proprietary, its methodology is closed to external scrutiny, raising due process concerns.
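The disparity ProPublica reported concerns group-wise error rates rather than overall accuracy. The sketch below, in Python with pandas, shows one way false positive and false negative rates per group could be computed from predictions and outcomes; the function name, column names, and toy data are illustrative assumptions, not ProPublica's code or the Broward County dataset.

```python
# Minimal sketch of group-wise error-rate computation (illustrative only;
# column names and data are assumptions, not the Broward County records).
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_col: str,
                         label_col: str, pred_col: str) -> pd.DataFrame:
    """Compute false positive and false negative rates per group.

    label_col: 1 if the defendant actually reoffended within two years.
    pred_col:  1 if the tool flagged the defendant as high risk.
    """
    rows = []
    for group, sub in df.groupby(group_col):
        non_recid = sub[sub[label_col] == 0]   # did not reoffend
        recid = sub[sub[label_col] == 1]       # did reoffend
        # FPR: share of non-reoffenders incorrectly flagged as high risk.
        fpr = (non_recid[pred_col] == 1).mean() if len(non_recid) else float("nan")
        # FNR: share of reoffenders incorrectly labeled as low risk.
        fnr = (recid[pred_col] == 0).mean() if len(recid) else float("nan")
        rows.append({"group": group,
                     "false_positive_rate": fpr,
                     "false_negative_rate": fnr})
    return pd.DataFrame(rows)

# Tiny made-up example showing the expected input shape, not real results:
toy = pd.DataFrame({
    "race": ["Black", "Black", "White", "White"],
    "two_year_recid": [0, 1, 0, 1],
    "high_risk": [1, 1, 0, 0],
})
print(error_rates_by_group(toy, "race", "two_year_recid", "high_risk"))
```

A gap between groups on these two rates, with overall accuracy held roughly equal, is the pattern the ProPublica analysis described.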
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed