From 2015, the UK Home Office used an algorithmic 'streaming tool' to sort visa applications into traffic-light categories based partly on nationality; after rights groups challenged the system in court as discriminatory, the Home Office suspended it in August 2020.
From 2015, the UK Home Office deployed an AI-powered 'streaming tool' that automatically sorted visa applications into green, amber, or red categories under a traffic-light system. The algorithm used nationality as a key factor, maintaining a secret list of 'suspect nationalities' whose applications automatically received higher risk scores. Applications streamed red faced more intensive and more skeptical scrutiny, took longer to process, and were far more likely to be refused.

The system also suffered from a feedback loop: enforcement statistics shaped by the biased tool fed back into the data that determined which countries remained on the suspect list. By June 2019, the tool had processed more than 3.3 million visa applications, and 2.9 million people were granted entry.

The Joint Council for the Welfare of Immigrants (JCWI) and the advocacy group Foxglove launched a legal challenge in October 2019, arguing that the system constituted racial discrimination under the Equality Act 2010. Following the challenge, the Home Office agreed to suspend the algorithm on August 7, 2020, and committed to redesigning the system by October 2020, with equality and data protection impact assessments.
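The feedback-loop dynamic described above can be illustrated with a toy simulation. This is a hypothetical sketch, not a reconstruction of the actual Home Office system: the refusal probabilities, threshold, and list-update rule are all invented for illustration. The point it demonstrates is that if list membership raises refusal rates, and the list is refreshed from those same refusal statistics, a nationality that starts on the list can stay there indefinitely even when the underlying applicants are identical.

```python
import random

random.seed(0)  # deterministic run for reproducibility

def simulate(on_suspect_list, rounds=10, apps=1000, threshold=0.15):
    """Toy feedback loop (hypothetical numbers throughout).

    Each round, applications are refused at a rate driven by list
    membership alone, then the list is refreshed from the refusal
    statistics the list itself produced.
    """
    history = []
    for _ in range(rounds):
        # Listed nationalities get extra scrutiny and more refusals,
        # regardless of applicant merit (same applicant pool either way).
        refusal_prob = 0.25 if on_suspect_list else 0.05
        refusals = sum(random.random() < refusal_prob for _ in range(apps))
        rate = refusals / apps
        # The self-reinforcing step: next round's list membership is
        # decided by statistics the current list membership generated.
        on_suspect_list = rate > threshold
        history.append((rate, on_suspect_list))
    return history

listed = simulate(on_suspect_list=True)
unlisted = simulate(on_suspect_list=False)
print("listed stays listed every round:", all(flag for _, flag in listed))
print("unlisted stays unlisted every round:", not any(flag for _, flag in unlisted))
```

With these (assumed) parameters the two populations never swap status: the listed group's elevated refusal rate keeps it above the threshold every round, so the bias is locked in by the system's own outputs rather than by any property of the applicants.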
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by an AI system, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes for, or unfair representation of, those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed