Fairness
Risk Domain
The accuracy and effectiveness of AI decisions and actions depend on group membership: choices in AI system design and biased training data lead to unequal outcomes, reduced benefits, increased effort, and alienation of users.
This challenge appears when the learning model produces decisions that are biased with respect to sensitive attributes. The data itself may be biased, which results in unfair decisions; this problem should therefore be addressed at the data level, as a preprocessing step.
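One common data-level preprocessing technique of the kind described here is reweighing (in the style of Kamiran and Calders), which assigns each training instance a weight so that the sensitive attribute and the label become statistically independent under the weighted distribution. The sketch below is illustrative, not the source's own method; the toy data and function name are hypothetical.

```python
from collections import Counter

def reweigh(sensitive, labels):
    """Compute instance weights w(s, y) = P(s) * P(y) / P(s, y) so that,
    under the weighted distribution, the sensitive attribute s and the
    label y are independent (reweighing-style preprocessing)."""
    n = len(labels)
    count_s = Counter(sensitive)             # marginal counts of sensitive values
    count_y = Counter(labels)                # marginal counts of labels
    count_sy = Counter(zip(sensitive, labels))  # joint counts
    return [
        (count_s[s] / n) * (count_y[y] / n) / (count_sy[(s, y)] / n)
        for s, y in zip(sensitive, labels)
    ]

# Hypothetical toy data: group "a" receives positive labels more often than "b".
sensitive = ["a", "a", "a", "b", "b", "b"]
labels    = [1,   1,   0,   0,   0,   1]
weights = reweigh(sensitive, labels)
# Over-represented pairs like ("a", 1) get weight < 1; under-represented
# pairs like ("a", 0) get weight > 1, rebalancing the training data.
```

A downstream learner that accepts per-sample weights (e.g. via a `sample_weight` argument) can then be trained on the reweighed data without modifying the model itself.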
Entity — Who or what caused the harm
Intent — Whether the harm was intentional or accidental
Timing — Whether the risk is pre- or post-deployment
Other risks from Saghiri et al. (2022) (15)
Category | Subcategory | Entity | Intent | Timing
Energy Consumption | 6.6 Environmental harm | AI system | Unintentional | Pre-deployment
Data Issues | 1.1 Unfair discrimination and misrepresentation | AI system | Unintentional | Other
Robustness and Reliability | 7.3 Lack of capability or robustness | AI system | Unintentional | Post-deployment
Cheating and Deception | 7.2 AI possessing dangerous capabilities | AI system | Unintentional | Post-deployment
Security | 2.2 AI system security vulnerabilities and attacks | Human | Intentional | Other
Privacy | 2.1 Compromise of privacy by leaking or correctly inferring sensitive information | AI system | Other | Pre-deployment