AI-powered exam proctoring software used by the University of Toronto exhibited facial recognition bias, causing disproportionate difficulties for BIPOC students during identity verification processes and creating additional stress during exams.
The University of Toronto maintained partnerships with the exam monitoring services ProctorU and Examity, which use AI-powered facial recognition technology to verify student identities during online exams. Multiple BIPOC students reported that the AI systems had difficulty recognizing their faces and identifying their passports during check-in. Chelsea Okankwu, a student at Concordia University, faced unexpected difficulty verifying her identity when the monitoring software claimed the lighting was insufficient. Maame Adjoa at UofT reported that the AI system was often unable to identify her passport, requiring manual check-ins with human proctors that consumed five minutes of her 15-minute check-in window. These experiences were corroborated by scores of BIPOC students interviewed by major news outlets. Civil rights attorney Christine Webber noted that students flagged for potential cheating would be disproportionately African American and Asian. This bias aligns with research by UofT student Deb Raji, who found that AI facial-recognition technology carries built-in bias when the data sets used to train the models underrepresent BIPOC individuals. Six Democratic senators wrote an open letter highlighting privacy, accessibility, and equity issues with exam monitoring software. The University of Toronto acknowledged the equity concerns but continued using these services, with only a small number of units currently using ProctorU.
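The disparity described above can be framed as a difference in false rejection rates on genuine check-in attempts across demographic groups. The sketch below is a minimal, hypothetical illustration of how such rates could be disaggregated to surface this kind of bias; the field names and toy data are assumptions for illustration, not drawn from the proctoring vendors' actual systems or logs.

```python
# A minimal sketch (hypothetical data and field names) of auditing per-group
# identity-verification failure rates, the disparity described in this report.
from collections import defaultdict

def failure_rates_by_group(attempts):
    """Return the fraction of genuine check-in attempts rejected, per group.

    `attempts` is an iterable of dicts with hypothetical keys:
      - "group":    self-reported demographic group of the student
      - "genuine":  True if the student really was who they claimed to be
      - "accepted": True if the automated check-in succeeded
    """
    totals = defaultdict(int)
    rejected = defaultdict(int)
    for a in attempts:
        if not a["genuine"]:  # only genuine attempts count toward false rejections
            continue
        totals[a["group"]] += 1
        if not a["accepted"]:
            rejected[a["group"]] += 1
    return {g: rejected[g] / totals[g] for g in totals}

# Hypothetical toy data: a large gap between groups would indicate the kind of
# disparate impact described above (repeated lighting or passport failures).
toy_attempts = [
    {"group": "A", "genuine": True, "accepted": True},
    {"group": "A", "genuine": True, "accepted": True},
    {"group": "B", "genuine": True, "accepted": False},
    {"group": "B", "genuine": True, "accepted": True},
]
print(failure_rates_by_group(toy_attempts))  # e.g. {'A': 0.0, 'B': 0.5}
```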
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
The accuracy and effectiveness of AI decisions and actions depend on group membership: design decisions in the AI system and biased training data lead to unequal outcomes, reduced benefits, increased effort, and alienation of users.
AI system: Due to a decision or action made by an AI system.
Unintentional: Due to an unexpected outcome from pursuing a goal.
Post-deployment: Occurring after the AI model has been trained and deployed.