Oregon's Department of Human Services stopped using an algorithm that helped decide which families are investigated by child welfare social workers after analysis showed it created racial disparities in investigations.
Oregon's Department of Human Services announced in May that it would discontinue its Safety at Screening Tool algorithm by the end of June 2023. Implemented in 2018 and inspired by Pennsylvania's Allegheny Family Screening Tool, the algorithm was designed to predict a child's risk of ending up in foster care or being investigated in the future, generating numerical risk scores for hotline workers to weigh when deciding whether to send social workers to investigate families. Oregon officials had modified their version to use only internal child welfare data and had added a 'fairness correction' intended to address racial bias.

The decision to stop using the algorithm came after extensive analysis showed it contributed to disparities in which families were investigated for child abuse and neglect. The move followed an Associated Press review of Pennsylvania's similar tool, which found it had flagged a disproportionate number of Black children for mandatory neglect investigations, and U.S. Senator Ron Wyden had expressed concerns about racial bias in Oregon's system.

The algorithm will be replaced by a new Structured Decision Making model that aligns with those used in other child welfare jurisdictions. A second Oregon algorithm, designed to help decide when children in foster care can be reunified with their families, remains on hiatus due to inadequate data.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
AI system: Due to a decision or action made by an AI system
Unintentional: Due to an unexpected outcome from pursuing a goal
Post-deployment: Occurring after the AI model has been trained and deployed
No population impact data reported.