In 2016, Arkansas implemented an algorithmic system to determine Medicaid home care hours for disabled residents. The change produced dramatic cuts to care for hundreds of people, including those with cerebral palsy, and subsequent lawsuits revealed coding errors and implementation flaws in the algorithm.
In 2016, Arkansas replaced its human-based assessment system for allocating Medicaid home care hours with an algorithmic tool developed by InterRAI, a nonprofit coalition of health researchers. The algorithm analyzed about 60 health descriptions and symptoms to categorize people and determine care hours. Tammy Dobbs, who has cerebral palsy and previously received 56 hours per week of care, saw her hours cut to 32 per week without explanation. Hundreds of other program beneficiaries experienced similar dramatic reductions in care.

Legal Aid of Arkansas filed a federal lawsuit in 2016 representing Bradley Ledgerwood and Ethel Jacobs, arguing that the state failed to properly notify people about the change and provided no effective way to challenge decisions. During court proceedings, it was discovered that the algorithm contained multiple errors: the software vendor had mistakenly used a version that did not account for diabetes issues, affecting about 19% of beneficiaries, and cerebral palsy was not properly coded, causing incorrect calculations for hundreds of people.

The algorithm's developer, Brant Fries, acknowledged that the implementation was problematic, saying Arkansas officials did not follow his recommendations for gradual transitions or for grandfathering existing recipients. A federal judge ultimately ruled that the state had insufficiently implemented the program, and Arkansas officials made some procedural changes while planning to migrate to a new system.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, exposing users to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed