A risk assessment algorithm implemented in Virginia's criminal justice system in 2002 led to longer sentences for young and Black defendants while failing to reduce overall incarceration rates or recidivism.
Virginia implemented a statewide algorithmic risk assessment system in 2002 to help judges make sentencing decisions for felony convictions, with the goal of reducing prison populations after discretionary parole was abolished. The system assigned risk scores to defendants based on factors including offense type, age, prior convictions, employment status, and marital status, and was designed to identify low-risk offenders for shorter sentences or alternative programs such as probation.

Analyzing tens of thousands of felony convictions from 2000 to 2004, researchers Megan Stevenson and Jennifer Doleac found that judges followed the algorithm's recommendations less than half the time and that the system failed to achieve its primary goal of reducing incarceration or recidivism rates. Instead, defendants with higher risk scores received longer sentences and those with lower scores received shorter ones, with the two effects canceling each other out.

The study documented significant racial disparities: Black defendants were 4 percentage points more likely to be incarcerated and received sentences 17 percent longer than otherwise-equivalent white defendants. Young defendants (under 23) were also disproportionately affected, being 4 percentage points more likely to be incarcerated, with sentences 12 percent longer than those of older peers. The algorithm weighted age heavily as a predictor: defendants under 30 received 13 points, compared with 9 points for having five or more prior incarcerations.
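To make the age-weighting concrete, the scoring logic described above can be sketched as a simple additive points model. Only two weights below come from the incident description (13 points for age under 30, 9 points for five or more prior incarcerations); the other factors' weights, the function names, and the low-risk cutoff are hypothetical placeholders, not the actual Virginia instrument.

```python
def risk_score(age, prior_incarcerations, employed, married):
    """Illustrative additive risk score; not the real Virginia instrument."""
    score = 0
    if age < 30:
        score += 13  # weight reported in the study
    if prior_incarcerations >= 5:
        score += 9   # weight reported in the study
    if not employed:
        score += 4   # hypothetical weight
    if not married:
        score += 3   # hypothetical weight
    return score

def recommend_alternative(score, cutoff=10):
    # Defendants at or below a cutoff would be flagged as low-risk
    # candidates for probation or other alternatives; the cutoff
    # value here is invented for illustration.
    return score <= cutoff

# Under these weights, a 25-year-old with no record outscores a
# 40-year-old with five prior incarcerations on age alone.
young = risk_score(25, 0, employed=True, married=True)  # 13
older = risk_score(40, 5, employed=True, married=True)  # 9
```

This illustrates the dynamic the researchers criticized: because youth alone contributes more points than an extensive incarceration history, young defendants are pushed toward the high-risk, longer-sentence end of the scale regardless of their record.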
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
AI system: due to a decision or action made by an AI system.
Unintentional: due to an unexpected outcome from pursuing a goal.
Post-deployment: occurring after the AI model has been trained and deployed.