Lack of Interpretability
Risk Domain
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
Due to the black-box nature of most machine learning models, users typically are not able to understand the reasoning behind the model decisions (p. 22)
Entity: who or what caused the harm
Intent: whether the harm was intentional or accidental
Timing: whether the risk is pre- or post-deployment
Part of Explainability & Reasoning
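As a hedged sketch of the classification above (the class and enum names here are illustrative, not part of any published schema), a risk entry and its causal-taxonomy fields could be represented as:

```python
from dataclasses import dataclass
from enum import Enum

class Entity(Enum):
    AI_SYSTEM = "AI system"   # the AI system itself caused the harm
    HUMAN = "Human"
    OTHER = "Other"

class Intent(Enum):
    INTENTIONAL = "Intentional"
    UNINTENTIONAL = "Unintentional"

class Timing(Enum):
    PRE_DEPLOYMENT = "Pre-deployment"
    POST_DEPLOYMENT = "Post-deployment"

@dataclass
class RiskEntry:
    name: str          # e.g. "Reliability > Misinformation"
    domain: str        # e.g. "3.1 False or misleading information"
    entity: Entity     # who or what caused the harm
    intent: Intent     # whether the harm was intentional or accidental
    timing: Timing     # whether the risk is pre- or post-deployment

# Example: one of the Liu et al. (2024) subrisks listed in this entry
misinformation = RiskEntry(
    name="Reliability > Misinformation",
    domain="3.1 False or misleading information",
    entity=Entity.AI_SYSTEM,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
)
```

Encoding the three causal dimensions as enums rather than free text keeps entries comparable across risks, which is what makes the tabulated classifications below filterable.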
Other risks from Liu et al. (2024) (34)
Risk | Domain | Entity | Intent | Timing
Reliability | 3.1 False or misleading information | AI system | Unintentional | Post-deployment
Reliability > Misinformation | 3.1 False or misleading information | AI system | Unintentional | Post-deployment
Reliability > Hallucination | 3.1 False or misleading information | AI system | Unintentional | Post-deployment
Reliability > Inconsistency | 7.3 Lack of capability or robustness | AI system | Unintentional | Post-deployment
Reliability > Miscalibration | 3.1 False or misleading information | AI system | Unintentional | Post-deployment
Reliability > Sycophancy | 3.1 False or misleading information | AI system | Intentional | Post-deployment