The Edmonton Police Service used AI-based DNA phenotyping technology to generate a facial composite of a Black male suspect in a 2019 sexual assault case, but removed the image after facing criticism over racial bias and the technique's scientific validity.
In October 2022, the Edmonton Police Service (EPS) in Alberta, Canada used DNA phenotyping technology for the first time to generate a facial composite of a suspect in an unsolved 2019 sexual assault case. The police enlisted Parabon NanoLabs, a Virginia-based DNA technology company, to use its Snapshot DNA Phenotyping Service to predict the physical appearance and ancestry of the suspect from DNA evidence. The AI system generated predictions for ancestry, eye color, hair color, skin color, freckling, and face shape, and combined these into a composite image depicting what the person of interest might have looked like at 25 years old and with an average body mass index (BMI) of 22. The generated image showed a Black male suspect.

On October 4, 2022, EPS shared the AI-generated image on Twitter and in a press release, describing it as a 'last resort' after all other investigative avenues had been exhausted. However, the release immediately sparked controversy and criticism from genetics experts and members of the public, who questioned the scientific validity of DNA phenotyping and raised concerns about racial bias. Dr. Adam Rutherford, a genetics lecturer at University College London, criticized the technology as 'dangerous snake oil,' stating that accurate facial profiles and pigmentation predictions cannot be made from DNA alone.

Following the backlash, EPS quickly removed the image from its website and social media platforms on October 6, 2022, and issued an apology acknowledging that it had not adequately considered the risks of providing 'far too broad a characterization from within a racialized community.'
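For readers unfamiliar with how such composites are assembled, the minimal Python sketch below illustrates the general structure of a trait-by-trait phenotype prediction combined with fixed demographic defaults (age 25 and BMI 22, as reported above). It is a hypothetical illustration only: Parabon has not published the internals of its Snapshot service, so the class names, traits, and confidence values here are assumptions, not the company's actual model or API.

```python
from dataclasses import dataclass

# Hypothetical illustration only; not Parabon's actual data model.

@dataclass
class TraitPrediction:
    trait: str         # e.g. "eye_color"
    value: str         # predicted category, e.g. "brown"
    confidence: float  # assumed model confidence in [0, 1]

@dataclass
class CompositeProfile:
    predictions: list[TraitPrediction]
    assumed_age: int = 25    # not predicted from DNA; a fixed default
    assumed_bmi: float = 22  # not predicted from DNA; a fixed default

    def summary(self, threshold: float = 0.7) -> str:
        """List traits whose confidence clears the threshold, making explicit
        which parts of the composite are predictions and which are assumptions."""
        kept = [f"{p.trait}={p.value} ({p.confidence:.0%})"
                for p in self.predictions if p.confidence >= threshold]
        return (f"Predicted traits: {', '.join(kept) or 'none above threshold'}; "
                f"assumed age {self.assumed_age}, assumed BMI {self.assumed_bmi}")

# Illustrative values only.
profile = CompositeProfile(predictions=[
    TraitPrediction("ancestry", "West African", 0.90),
    TraitPrediction("eye_color", "brown", 0.85),
    TraitPrediction("skin_color", "dark", 0.80),
    TraitPrediction("freckling", "none", 0.60),
    TraitPrediction("face_shape", "composite morphology", 0.50),
])
print(profile.summary())
```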
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
AI system: due to a decision or action made by an AI system.
Intentional: due to an expected outcome from pursuing a goal.
Post-deployment: occurring after the AI model has been trained and deployed.
No population impact data reported.