Dartmouth's Geisel School of Medicine used Canvas learning management system data to retroactively track student activity during remote exams without students' knowledge, leading to cheating accusations against 17 medical students based on flawed analysis that mistook automated system activity for human activity; seven of the cases were later dismissed.
In March 2021, Dartmouth's Geisel School of Medicine accused 17 first- and second-year medical students of cheating on remote exams based on Canvas learning management system activity data. The investigation began after a faculty member reported possible cheating in January. The school's Committee on Student Performance and Conduct reviewed Canvas activity during 18 remote exams, covering more than 3,000 exam instances since fall 2020.

Technology staff developed a system to recognize patterns that might signal cheating, typically activity on a Canvas course page followed by activity on related study pages during an exam. The methodology was flawed, however, because Canvas can automatically generate activity data even when no one is using a device, and students often had dozens of course pages open that they rarely logged out of.

Seven of the 17 cases were eventually dismissed, with administrators acknowledging in at least one case that 'automated Canvas processes are likely to have created the data that was seen rather than deliberate activity by the user.' The remaining 10 students faced expulsion, suspension, or course failures with misconduct marks that could end their medical careers. Students reported being given less than 48 hours to respond to charges, not receiving complete data logs, and being advised to plead guilty despite denying cheating. The incident sparked campus protests and faculty criticism of the investigation's methodology and fairness.
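The core flaw can be illustrated with a minimal Python sketch. This is a hypothetical reconstruction, not Dartmouth's actual system: the event structure, field names, and data are invented for illustration. It shows why a heuristic that flags any course-page activity during an exam window cannot distinguish a human click from an automated background request, since both look identical in raw activity logs.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    timestamp: datetime
    page: str          # e.g. "course/biochem/notes"
    automated: bool    # ground truth that the activity log itself does NOT reveal

def flag_suspicious(events, exam_start, exam_end):
    """Hypothetical heuristic resembling the one described above: flag any
    course-page activity that falls inside the exam window. The heuristic
    only sees timestamp and page, so automated background traffic is
    indistinguishable from deliberate browsing."""
    return [e for e in events if exam_start <= e.timestamp <= exam_end]

exam_start = datetime(2021, 1, 15, 9, 0)
exam_end = datetime(2021, 1, 15, 11, 0)

# A student reviewed notes before the exam, then left course tabs open;
# Canvas generated activity on those pages with no one at the keyboard.
events = [
    Event(datetime(2021, 1, 15, 8, 30), "course/biochem/notes", automated=False),
    Event(datetime(2021, 1, 15, 9, 45), "course/biochem/notes", automated=True),
    Event(datetime(2021, 1, 15, 10, 10), "course/anatomy/slides", automated=True),
]

flagged = flag_suspicious(events, exam_start, exam_end)
# Every flagged event is automated: this student would be falsely accused.
assert len(flagged) == 2 and all(e.automated for e in flagged)
```

Under this sketch, a sound process would need evidence beyond in-window timestamps, such as distinguishing user-initiated requests from background polling, which is exactly the distinction the accusations reportedly lacked.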
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed