German journalists investigated the Schufa credit scoring algorithm and found that it discriminates by age and gender, that it assigns poor scores to people with exclusively positive credit histories, and that outdated versions of the algorithm remain in use, each of which can unfairly deny access to financial services.
German investigative journalists from Der Spiegel and Bavarian Public Broadcaster conducted a year-long investigation into Schufa, Germany's most influential credit scoring agency, analyzing over 2,000 consumer credit reports. Schufa operates a proprietary algorithmic system that assigns credit scores to over 67 million German consumers, influencing access to banking, insurance, housing, and other services.

The investigation revealed multiple problems: the algorithm privileges older and female individuals while disadvantaging younger men, assigns poor risk ratings to consumers whose credit histories contain only positive entries, and sometimes relies on as few as three data points to make consequential decisions. Schufa has used four different algorithm versions since 1997, and companies still use outdated versions that can produce significantly different scores for the same person.

Specific cases included a 20-year-old employee rated 'elevated to high risk' despite having only positive credit information, and Sven Drewert, who was denied a credit limit increase for a vacation despite a perfect payment history. The journalists found that one in eight consumers with only positive credit markers still received 'elevated' or 'high' risk ratings. The investigation drew on crowdsourced data from consumers who requested their free credit reports, though only about 10% of the 30,000 people who requested reports actually uploaded them, owing to the cumbersome paper-based process.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and misrepresentation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed