Explainability & Transparency
Risk Domain
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
"The feasibility of understanding and interpreting an AI system's decisions and actions, and the openness of the developer about the data used, algorithms employed, and decisions made. Lack of these elements can create risks of misuse, misinterpretation, and lack of accountability."(p. 23048)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Sherman & Eisenberg (2023) (8)

| Risk | Mapped category | Entity | Intent | Timing |
|------|-----------------|--------|--------|--------|
| Abuse & Misuse | 4.2 Cyberattacks, weapon development or use, and mass harm | Human | Intentional | Post-deployment |
| Compliance | 6.5 Governance failure | AI system | Other | Post-deployment |
| Environmental & Societal Impact | 6.0 Socioeconomic & Environmental | Other | Other | Post-deployment |
| Fairness & Bias | 1.1 Unfair discrimination and misrepresentation | AI system | Unintentional | Other |
| Long-term & Existential Risk | 7.1 AI pursuing its own goals in conflict with human goals or values | Other | Other | Post-deployment |
| Performance & Robustness | 7.3 Lack of capability or robustness | AI system | Unintentional | Post-deployment |
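For anyone working with this mapping programmatically, it can be encoded as a small lookup structure and filtered on the Entity, Intent, or Timing dimensions. The sketch below is illustrative only; the field names (`category`, `entity`, `intent`, `timing`) and the helper function are assumptions for this example, not part of any published schema.

```python
# Illustrative encoding of the Sherman & Eisenberg (2023) risk mapping.
# Field names ("category", "entity", "intent", "timing") are assumed
# for this sketch; they are not an official schema.
RISK_MAPPING = {
    "Abuse & Misuse": {
        "category": "4.2 Cyberattacks, weapon development or use, and mass harm",
        "entity": "Human", "intent": "Intentional", "timing": "Post-deployment",
    },
    "Compliance": {
        "category": "6.5 Governance failure",
        "entity": "AI system", "intent": "Other", "timing": "Post-deployment",
    },
    "Environmental & Societal Impact": {
        "category": "6.0 Socioeconomic & Environmental",
        "entity": "Other", "intent": "Other", "timing": "Post-deployment",
    },
    "Fairness & Bias": {
        "category": "1.1 Unfair discrimination and misrepresentation",
        "entity": "AI system", "intent": "Unintentional", "timing": "Other",
    },
    "Long-term & Existential Risk": {
        "category": "7.1 AI pursuing its own goals in conflict with human goals or values",
        "entity": "Other", "intent": "Other", "timing": "Post-deployment",
    },
    "Performance & Robustness": {
        "category": "7.3 Lack of capability or robustness",
        "entity": "AI system", "intent": "Unintentional", "timing": "Post-deployment",
    },
}

def risks_by_entity(entity: str) -> list[str]:
    """Return the risk names attributed to a given causal entity."""
    return [name for name, row in RISK_MAPPING.items() if row["entity"] == entity]
```

For example, `risks_by_entity("AI system")` selects the three rows in the table whose Entity column is "AI system".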