The Vera-2R tool, designed to predict future crime in terrorist offenders, incorrectly classified autism spectrum disorders as a risk factor for reoffending despite having no empirical basis for this classification.
The Vera-2R tool is an AI system designed to predict future crime in terrorist offenders, used by both the federal and NSW governments in Australia. An independent report by Australian National University academics Dr Emily Corner and Dr Helen Taylor, completed for the Department of Home Affairs in May 2020, found serious flaws in the tool. Autism spectrum disorders and non-compliance with conditions or supervision were included as risk factors, yet the most comprehensive systematic review of drivers of radicalisation and terrorist behaviour found not a single piece of empirical evidence supporting either factor. The Corner report found the tool was 'extremely poor' at predicting risk and had 'potentially serious implications for validity and reliability.' It also found that almost 60% of the cited evidence base for factor development was not empirical, and that less than half of the works cited accurately reflected the recorded texts.

Despite receiving this critical report in May 2020, the federal government used the tool 14 times afterward, continuing to rely on it to justify harsh post-sentence orders for offenders, including ongoing detention. The government did not disclose the report to lawyers of convicted offenders, its own experts, or the NSW government.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by an AI system, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes for, and unfair representation of, those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.