The Allegheny Family Screening Tool (AFST), an AI system used by Allegheny County's child welfare agency to help screeners decide which neglect allegations to investigate, was found to produce discriminatory outcomes against Black families and people with disabilities due to biased design decisions and data sources.
The Allegheny Family Screening Tool (AFST) is an AI system deployed by Allegheny County, Pennsylvania, since 2016 to help child welfare workers decide whether to investigate neglect allegations. The tool calculates a risk score from 0 to 20, predicting the likelihood that a child will be removed from the home within two years, using data drawn from multiple government databases, including criminal justice and behavioral health records. An ACLU analysis found that the tool could label 33% of Black households 'high risk', compared with only 20% of non-Black households. The system also flags households that include people with disabilities as higher risk because of its reliance on behavioral health databases. The Hackneys, parents with developmental disabilities, had their 8-month-old daughter taken into foster care after they brought her to the hospital for dehydration; they suspect the AFST contributed to this decision, and their daughter has remained in foster care for over a year. The U.S. Justice Department is now investigating whether the county's use of the algorithm discriminates against people with disabilities. Child welfare agencies in at least 26 states have considered using similar algorithmic tools, and at least 11 jurisdictions have deployed them. The AFST's developers acknowledge making 'rather arbitrary' design decisions during development, and families can neither access their risk scores nor challenge the algorithm's assessments.
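The disparity reported in the ACLU analysis above can be illustrated with a minimal sketch. The 33% and 20% rates come from this entry; the threshold logic, function names, and sample scores are hypothetical illustrations, not the AFST's actual implementation:

```python
# Hypothetical sketch of the disparate-impact comparison described above.
# The rates (0.33, 0.20) are from the ACLU analysis cited in this entry;
# everything else (threshold, sample scores) is illustrative only.

def high_risk_rate(scores, threshold):
    """Fraction of households whose risk score meets or exceeds the threshold."""
    flagged = [s for s in scores if s >= threshold]
    return len(flagged) / len(scores)

def disparity_ratio(rate_a, rate_b):
    """Ratio of 'high risk' flagging rates between two groups (1.0 = parity)."""
    return rate_a / rate_b

# Rates reported by the ACLU analysis:
black_rate = 0.33
non_black_rate = 0.20

# Black households are flagged 'high risk' about 1.65x as often.
print(round(disparity_ratio(black_rate, non_black_rate), 2))

# Illustrative use of the thresholding helper on made-up 0-20 scores:
print(high_risk_rate([5, 12, 18, 3], threshold=10))
```

A ratio well above 1.0 is one common way auditors quantify disparate impact between demographic groups, independent of how the underlying scores are produced.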
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed