The UK Department for Work and Pensions (DWP) used an AI algorithm to flag disabled people for benefit fraud investigations, resulting in invasive surveillance and psychological harm to vulnerable claimants.
The UK Department for Work and Pensions (DWP) deployed an AI-driven system to identify potential benefit fraud among welfare claimants, particularly targeting disabled people receiving Personal Independence Payments (PIP) and other disability benefits. The system uses 'cutting-edge artificial intelligence' and 'data matching' techniques to flag claimants for investigation, drawing on information from airlines, PayPal, social clubs, employers, and social media. Ellen, a disabled woman with a chronic illness, was subjected to six weeks of covert surveillance, including being filmed at a charity fundraising event for her own condition. The DWP presented evidence of her daily activities as proof of fraud, misinterpreting her doctor-recommended exercise and upgraded flights (needed for extra leg room because of her condition) as signs of a fraudulent claim. The Greater Manchester Coalition of Disabled People (GMCDP) reports that a 'huge percentage' of its members have been targeted by the system, with investigations lasting up to a year and causing severe psychological distress. The algorithm's methodology remains secret despite legal challenges; DWP officials claim it relies on 'data matching' rather than true algorithmic decision-making, though they acknowledge using machine learning and AI technologies for fraud detection.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
The accuracy and effectiveness of AI decisions and actions depend on group membership: choices made in AI system design, combined with biased training data, lead to unequal outcomes, reduced benefits, increased effort, and alienation for affected users.
AI system
Due to a decision or action made by an AI system
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed