The Chicago Police Department's Strategic Subject List used predictive algorithms to identify individuals likely to be involved in shootings, but it disproportionately targeted Black residents and other communities of color while showing limited effectiveness in reducing violence.
The Chicago Police Department implemented the Strategic Subject List (SSL), also known as the "heat list," beginning in 2012 as a predictive policing tool. The system used algorithms developed by the Illinois Institute of Technology to analyze arrest data, social networks, and other factors, assigning risk scores from 1 to 500 to individuals deemed likely to be involved in shootings as either perpetrators or victims. By 2016, nearly 400,000 people had been assigned scores, with approximately 1,400 in the highest-risk category. The scores guided police interventions including home visits, targeted arrests, and resource allocation.

A 2016 RAND Corporation study found the program had no significant impact on reducing homicides or shootings citywide. The analysis revealed that 56 percent of Black men in Chicago ages 20-29 had received SSL scores, and the algorithm disproportionately flagged residents of predominantly Black and low-income neighborhoods for increased police attention. Despite claims of being race-neutral, the system reproduced and amplified existing biases in the underlying police data.

The program was quietly decommissioned in November 2019 after the city's Office of Inspector General raised concerns about unreliable risk scores, improperly trained personnel, and a lack of long-term sustainability.
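The bias-amplification mechanism described above can be illustrated with a minimal, hypothetical simulation (this is not the actual SSL model, whose features and weights were not public in this form): if two neighborhoods have the same underlying rate of involvement in violence but one is patrolled far more heavily, a model scored on arrest records will assign that neighborhood higher "risk" simply because more of its incidents are observed. The neighborhood labels, rates, and scoring function below are all illustrative assumptions.

```python
import random

random.seed(0)

# Assumption: both neighborhoods have the SAME underlying involvement rate.
TRUE_RATE = 0.05
# Assumption: neighborhood "A" is patrolled three times as heavily as "B",
# so involvement there is three times as likely to produce an arrest record.
PATROL_RATE = {"A": 0.9, "B": 0.3}

def simulate_arrests(n=10_000):
    """Generate records where arrests reflect behavior AND policing intensity."""
    records = []
    for _ in range(n):
        hood = random.choice(["A", "B"])
        involved = random.random() < TRUE_RATE
        # An incident only enters the data if police are present to record it.
        arrested = involved and random.random() < PATROL_RATE[hood]
        records.append((hood, arrested))
    return records

def naive_risk_score(records, hood):
    """A naive 'predictive' score: the observed arrest rate per neighborhood."""
    group = [arrested for h, arrested in records if h == hood]
    return sum(group) / len(group)

records = simulate_arrests()
score_a = naive_risk_score(records, "A")
score_b = naive_risk_score(records, "B")

# A's score comes out roughly 3x B's, mirroring the patrol-rate disparity
# rather than any difference in actual risk.
print(f"A: {score_a:.3f}  B: {score_b:.3f}  ratio: {score_a / score_b:.1f}")
```

The sketch shows why the entry's claim of a "race-neutral" model failing in practice is plausible: the model never sees race directly, yet its scores track where enforcement happens, and interventions guided by those scores generate more arrests in the same places, feeding the disparity back into the training data.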
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes for and unfair representation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed