Amsterdam courts ruled that Uber and Ola used automated decision-making systems to suspend and penalize drivers without meaningful human oversight, violating GDPR rights to transparency and appeal.
Between 2018 and 2020, the ride-hailing platforms Uber and Ola implemented automated driver-management systems that led to suspensions and wage deductions without adequate human oversight. The Amsterdam District Court found that Ola used an entirely automated system to make deductions from driver earnings, while Uber dismissed drivers based on algorithmic "fraud detection" without providing evidence or appeal rights. Four drivers (three from the UK, one from Portugal) were dismissed after Uber's systems allegedly detected "fraudulent activity," with the decisions made remotely from an Uber office in Krakow. The drivers were denied access to their personal data, fraud probability scores, passenger ratings, and explanations of the algorithmic decision-making processes. In 2023, the Amsterdam Court of Appeal ruled largely in favor of the drivers, finding that several automated processes qualified as automated decision-making under Article 22 of the GDPR, including ride assignment, price calculation, driver rating, fraud probability scoring, and account deactivation. The court ordered both companies to provide transparency about their algorithmic management systems and to pay case costs, with potential daily fines of several thousand euros for non-compliance.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and an inability to identify and correct errors.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed