Bias
Category
Risk Domain
Unequal treatment of individuals or groups by an AI system, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and misrepresentation of those groups.
"The AI will only be as good as the data it is trained with. If the data contains bias (and much data does), then the AI will manifest that bias, too." (p. 8)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk arises pre- or post-deployment
Other risks from Hogenhout (2021) (12)
| Risk | Taxonomy subdomain | Entity | Intent | Timing |
|---|---|---|---|---|
| Incompetence | 7.3 Lack of capability or robustness | AI system | Unintentional | Post-deployment |
| Loss of privacy | 2.1 Compromise of privacy by leaking or correctly inferring sensitive information | Human | Intentional | Post-deployment |
| Discrimination | 1.1 Unfair discrimination and misrepresentation | AI system | Unintentional | Post-deployment |
| Erosion of society | 3.2 Pollution of information ecosystem and loss of consensus reality | AI system | Unintentional | Post-deployment |
| Lack of transparency | 7.4 Lack of transparency or interpretability | AI system | Unintentional | Other |
| Deception | 4.3 Fraud, scams, and targeted manipulation | AI system | Other | Post-deployment |
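The three classification dimensions above (Entity, Intent, Timing) can be sketched as a small data structure, which makes it easy to filter risks along any dimension. This is a hypothetical illustration; the class and field names are mine, not from Hogenhout (2021).

```python
from dataclasses import dataclass

# Hypothetical sketch: one row of the risk table above,
# classified along the Entity / Intent / Timing dimensions.
@dataclass(frozen=True)
class RiskEntry:
    name: str       # risk label, e.g. "Discrimination"
    subdomain: str  # taxonomy code and description
    entity: str     # who or what caused the harm
    intent: str     # "Intentional", "Unintentional", or "Other"
    timing: str     # "Pre-deployment", "Post-deployment", or "Other"

RISKS = [
    RiskEntry("Incompetence", "7.3 Lack of capability or robustness",
              "AI system", "Unintentional", "Post-deployment"),
    RiskEntry("Loss of privacy",
              "2.1 Compromise of privacy by leaking or correctly "
              "inferring sensitive information",
              "Human", "Intentional", "Post-deployment"),
    RiskEntry("Discrimination",
              "1.1 Unfair discrimination and misrepresentation",
              "AI system", "Unintentional", "Post-deployment"),
    RiskEntry("Erosion of society",
              "3.2 Pollution of information ecosystem and loss of "
              "consensus reality",
              "AI system", "Unintentional", "Post-deployment"),
    RiskEntry("Lack of transparency",
              "7.4 Lack of transparency or interpretability",
              "AI system", "Unintentional", "Other"),
    RiskEntry("Deception", "4.3 Fraud, scams, and targeted manipulation",
              "AI system", "Other", "Post-deployment"),
]

# Filtering along a dimension: intentional harms caused by a human.
human_intentional = [r.name for r in RISKS
                     if r.entity == "Human" and r.intent == "Intentional"]
print(human_intentional)  # ['Loss of privacy']
```

Treating each risk as a record of its dimension values mirrors how the table is read: most entries here are unintentional, post-deployment harms caused by the AI system itself, with privacy loss the one human-caused, intentional exception.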