Fairness
Category: Risk Domain
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
"The general principle of equal treatment requires that an AI system upholds the principle of fairness, both ethically and legally. This means that the same facts are treated equally for each person unless there is an objective justification for unequal treatment." (p. 10)
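The "same facts treated equally" principle above can be made measurable. A minimal sketch, assuming binary outcomes and a single sensitive attribute: the functions below compute per-group positive-outcome rates and their min/max ratio (demographic parity), one common statistical proxy for unequal treatment. The group labels, sample data, and the idea of comparing rates as a ratio are illustrative assumptions, not part of the cited taxonomy.

```python
# Illustrative sketch of a demographic-parity check (not from the source).
# outcomes: 1 = favourable decision, 0 = unfavourable.
# groups: sensitive-attribute value per individual (hypothetical labels).

def selection_rates(outcomes, groups):
    """Positive-outcome rate per sensitive group."""
    totals, positives = {}, {}
    for y, g in zip(outcomes, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if y == 1 else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, groups):
    """Min/max ratio of group selection rates; 1.0 means parity."""
    rates = selection_rates(outcomes, groups)
    return min(rates.values()) / max(rates.values())

# Toy data: group "a" is favoured 3 times out of 4, group "b" once.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(selection_rates(outcomes, groups))        # {'a': 0.75, 'b': 0.25}
print(disparate_impact_ratio(outcomes, groups)) # 0.333...
```

A low ratio alone does not prove unfairness; under the quoted principle, a disparity is only a problem when there is no objective justification for it.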
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk arises pre- or post-deployment
Other risks from Steimers & Schneider (2022) (7)

Privacy
  2.0 Privacy & Security (Entity: AI system; Intent: Other; Timing: Other)

Degree of Automation and Control
  7.1 AI pursuing its own goals in conflict with human goals or values (Entity: AI system; Intent: Other; Timing: Post-deployment)

Complexity of the Intended Task and Usage Environment
  7.3 Lack of capability or robustness (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)

Degree of Transparency and Explainability
  7.4 Lack of transparency or interpretability (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)

Security
  2.2 AI system security vulnerabilities and attacks (Entity: Other; Intent: Other; Timing: Post-deployment)

System Hardware
  7.3 Lack of capability or robustness (Entity: AI system; Intent: Unintentional; Timing: Other)