Discrimination
Sub-category
Risk Domain
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
"This is the risk of an ML system encoding stereotypes of, or performing disproportionately poorly for, some demographics/social groups" (p. 13).
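The failure mode quoted above — a model "performing disproportionately poorly for some demographics/social groups" — can be made concrete with a per-group error-rate check. The sketch below is illustrative only and not from the source: the group labels, predictions, and the `group_error_rates` helper are invented for this example.

```python
# Illustrative sketch (hypothetical data): measure whether a classifier's
# error rate differs across demographic groups.

def group_error_rates(y_true, y_pred, groups):
    """Return a {group: error rate} dict for binary predictions."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        errors = sum(y_true[i] != y_pred[i] for i in idx)
        rates[g] = errors / len(idx)
    return rates

# Invented toy data: two groups, four examples each.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = group_error_rates(y_true, y_pred, groups)
# The spread between the best- and worst-served group is one simple
# signal of the disproportionate-performance risk described above.
gap = max(rates.values()) - min(rates.values())
```

A large `gap` would flag exactly the kind of unequal treatment this risk entry describes; in practice, disaggregated evaluation like this is done per metric and per sensitive attribute.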
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Part of Second-Order Risks
Other risks from Tan, Taeihagh & Baxter (2022) (17)
Risk | Risk Domain | Entity | Intent | Timing
First-Order Risks | 7.0 AI System Safety, Failures & Limitations | Other | Other | Other
First-Order Risks > Application | 7.0 AI System Safety, Failures & Limitations | Human | Intentional | Post-deployment
First-Order Risks > Misapplication | 7.3 Lack of capability or robustness | Human | Intentional | Post-deployment
First-Order Risks > Algorithm | 7.3 Lack of capability or robustness | AI system | Unintentional | Pre-deployment
First-Order Risks > Training & validation data | 7.0 AI System Safety, Failures & Limitations | Human | Other | Pre-deployment
First-Order Risks > Robustness | 7.3 Lack of capability or robustness | AI system | Unintentional | Post-deployment