State Farm's AI-powered fraud detection system algorithmically flagged Black homeowners' insurance claims for additional scrutiny. As a result, Black customers faced longer processing times, more paperwork requirements, and more interactions with company representatives than white customers.
State Farm, the nation's largest homeowner insurance provider, processed insurance claims with an AI-powered fraud detection system developed by FRISS, a Netherlands-based AI firm, and integrated through Duck Creek Technologies. The system assigned 'risk scores' to policyholders using neighborhood-level demographic data, including 'degree of urbanization,' crime statistics, and social media data.

A nine-month study conducted in 2021 by NYU School of Law and Fairmark Partners surveyed more than 800 State Farm customers across six Midwestern states. It found that Black homeowners were 20% more likely to need at least three interactions with State Farm representatives before claim approval and 39% more likely to be asked for additional paperwork. They were also significantly less likely to have claims paid within one month: only 30% of Black customers did, versus 39% of white customers.

The lawsuit, filed in December 2022 and seeking class-action status, represents potentially 10,000 Black customers and seeks hundreds of millions of dollars in damages. Plaintiff Jacqueline Huskey experienced the alleged discrimination firsthand: her June 2021 hail damage claim took four months to be partially approved, required 20 to 30 interactions with State Farm, and left her paying $7,000 out of pocket after receiving only $4,687 from the insurer.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes for, and unfair representation of, those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed