The Dutch government's AI-powered child benefits fraud detection system wrongly accused more than 26,000 families of fraud between 2013 and 2021, causing financial ruin and family separations, with a disproportionate impact on ethnic minorities.
From 2013 to 2021, the Dutch tax authority deployed a self-learning algorithm to detect child benefits fraud that systematically flagged families as fraudulent based on discriminatory risk profiles. The system used variables including dual nationality and ethnicity as risk indicators and falsely labeled more than 26,000 families as fraudsters. Families were forced to repay thousands of euros in benefits they had legitimately received; many were driven into bankruptcy, divorce, and mental health crises, children were removed from their homes, and some victims died by suicide. The algorithm discriminated particularly against people of color and immigrants, and the tax authority applied the Pareto principle, assuming that 80% of investigated families were guilty. The scandal led to the resignation of Prime Minister Mark Rutte's entire cabinet in January 2021. A separate report details Rotterdam's welfare fraud algorithm, which scored recipients on 315 variables, including subjective assessments, and discriminated against women, ethnic minorities, and other vulnerable groups. Although the system performed little better than random selection, it was used to investigate thousands of recipients annually until it was suspended in 2021.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed