A French welfare office's automated data mining system incorrectly classified a welfare recipient's employment status, leading to an erroneous debt notification of 542 euros that was later canceled after human review.
On March 17, 2021, a French welfare office requested additional documentation from a welfare recipient following welfare reform changes. The next day, the recipient received an automated notification stating that they owed 542 euros, of which 60 euros would be deducted monthly from future payments. The welfare office's automated software had analyzed the recipient's file incorrectly, accounting only for their freelance work while ignoring their salaried employment, which resulted in an incorrect benefit calculation. When the recipient contacted the welfare office to explain the error, a caseworker confirmed that 'the software' had automatically analyzed the file using 'parameters' and that the case had become 'too complex' once freelance work was added, causing the system to reset entirely. A welfare office employee later called to confirm that the debt had been canceled and the issue closed. French welfare offices have used data mining systems since 2012; these systems are responsible for three-quarters of all benefit controls, and housing assistance was the most frequently controlled benefit in 2021.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, making them prone to errors and failures that can have significant consequences, especially in critical applications or domains requiring moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed