The UK's exam regulator Ofqual deployed an algorithm to determine A-level and GCSE grades for students unable to sit exams due to COVID-19. The algorithm downgraded nearly 40% of results and disproportionately disadvantaged students from state schools relative to private schools, prompting widespread protests and an eventual government reversal to teacher-assessed grades.
In 2020, due to COVID-19 school closures, the UK's Office of Qualifications and Examinations Regulation (Ofqual) developed an algorithm to determine A-level and GCSE grades for students unable to sit traditional exams. The algorithm combined teachers' rank ordering of students within each subject and school with the school's historical performance over the previous three years (2017-2019) to generate standardized grades, with the stated aims of preventing grade inflation and maintaining consistency with previous years' results.

When A-level results were announced on August 13, 2020, nearly 40% of grades came out lower than teachers' predictions: 35.6% were reduced by one grade and 3.3% by two grades. The downgrades disproportionately affected state schools: the share of A* and A grades rose by only about 2 percentage points at comprehensive schools, compared with 4.7 percentage points at private schools. This occurred partly because private schools typically had smaller class sizes (cohorts of fewer than 15 students in a subject relied more heavily on teacher-assessed grades) and historically stronger results, both of which the algorithm favored.

Following widespread protests, legal challenges, and criticism from MPs, the government reversed course on August 17, 2020, announcing that students would receive teacher-assessed grades instead. GCSE results faced similar issues: approximately 4.6 million grades (about 97% of the total) were to be determined solely by the algorithm before the reversal. The incident caused chaos in university admissions, with roughly 15,000 students suddenly qualifying for their first-choice universities, and prompted calls for the resignation of Education Secretary Gavin Williamson and of Ofqual officials.
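The mechanics described above can be illustrated with a short sketch. The Python below is a hypothetical simplification, not Ofqual's published model: it maps a teacher's rank order of students onto the centre's historical grade distribution. The function name, data shapes, and distribution values are illustrative assumptions only.

```python
# Minimal sketch of rank-based standardization, assuming a simplified model:
# teacher-ranked students in a subject/centre are mapped onto the centre's
# historical grade distribution. Names and values are hypothetical, not
# Ofqual's actual implementation.

def standardize_grades(ranked_students, historical_distribution):
    """Assign grades to teacher-ranked students from a historical distribution.

    ranked_students: student IDs in teacher rank order, best first.
    historical_distribution: mapping of grade -> share of the centre's
        2017-2019 cohort awarded that grade, best grade first,
        e.g. {"A*": 0.10, "A": 0.20, ...}.
    """
    n = len(ranked_students)
    grades = {}
    cursor = 0
    # Walk the grade bands from best to worst, filling each band with the
    # proportion of students the centre historically achieved at that grade.
    for grade, share in historical_distribution.items():
        band_size = round(share * n)
        for student in ranked_students[cursor:cursor + band_size]:
            grades[student] = grade
        cursor += band_size
    # Any remainder left over from rounding falls into the lowest grade band.
    lowest_grade = list(historical_distribution)[-1]
    for student in ranked_students[cursor:]:
        grades[student] = lowest_grade
    return grades


if __name__ == "__main__":
    cohort = [f"student_{i}" for i in range(1, 21)]  # 20 students, rank order
    history = {"A*": 0.10, "A": 0.20, "B": 0.30, "C": 0.25, "D": 0.10, "E": 0.05}
    print(standardize_grades(cohort, history))
```

Under this kind of scheme, a student's grade is capped by how their centre performed in previous years: a strong individual at a historically low-performing school cannot be awarded a grade the school rarely achieved before, which is one mechanism behind the disparity described above.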
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Intentional (due to an expected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)