The Justice Department's Pattern algorithm, used to determine federal prisoners' eligibility for early release under the First Step Act, was found to produce significant racial disparities: it overpredicted recidivism risk for Black, Hispanic, and Asian prisoners while underpredicting violent-crime risk for some inmates of color.
The Justice Department developed an algorithmic risk assessment tool called Pattern to determine which federal prisoners could qualify for early release programs under the First Step Act of 2018. The algorithm was designed to assess the risk that a person in prison would return to crime; only those classified as minimum or low risk were eligible for credits toward early release. In December 2021, the Justice Department reported that Pattern produced uneven results with persistent racial disparities. The algorithm overpredicted recidivism risk for many Black, Hispanic, and Asian people while also underpredicting violent-crime risk for some inmates of color. Only 7% of Black people in the sample were classified as minimum risk, compared with 21% of white people. About 14,000 men and women in federal prison were placed in the wrong risk categories because of math and human errors in the system. The implementation was rushed to meet tight congressional deadlines, requiring subsequent tweaks after the errors were discovered. Attorney General Merrick Garland has directed the department to assess racial bias and improve transparency, and another overhaul of Pattern is already underway.
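The description above amounts to a threshold rule: a numeric risk score is binned into a category, and only the lowest categories confer eligibility for early-release credits. The sketch below illustrates that kind of rule; the cutoff values, category names, and function names are invented for illustration, and the real Pattern instrument uses separate general and violent recidivism scales with its own published cut points.

```python
# Hypothetical sketch of a threshold-based risk-category rule.
# All cutoffs and names here are invented; they are NOT Pattern's
# actual scales or cut points.

RISK_CATEGORIES = [
    (9, "minimum"),          # score <= 9  -> minimum risk (invented cutoff)
    (30, "low"),             # score <= 30 -> low risk     (invented cutoff)
    (44, "medium"),          # score <= 44 -> medium risk  (invented cutoff)
    (float("inf"), "high"),  # anything above -> high risk
]

def classify(score: float) -> str:
    """Map a risk score to a category using the invented cutoffs above."""
    for cutoff, category in RISK_CATEGORIES:
        if score <= cutoff:
            return category
    raise ValueError("unreachable: final cutoff is infinite")

def eligible_for_credits(score: float) -> bool:
    """Under this sketch, only minimum- or low-risk people earn credits."""
    return classify(score) in {"minimum", "low"}
```

A rule like this makes the incident's failure modes concrete: a math error that shifts scores, or miscalibrated cutoffs, moves people across category boundaries and directly changes who is eligible for release.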
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes for and unfair representation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed