Explainability & Reasoning
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
"The ability to explain the outputs to users and reason correctly" (p. 8)
Sub-categories (3)
Lack of Interpretability
Due to the black-box nature of most machine learning models, users typically cannot understand the reasoning behind a model's decisions.
Mapped to: 7.4 Lack of transparency or interpretability
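To make the opacity concrete, here is a minimal sketch (assuming scikit-learn; an illustration, not part of the taxonomy itself) of the gap between a black-box prediction and a post-hoc explanation: the model emits a label and a probability with no rationale, and permutation importance recovers only a coarse, approximate ranking of which inputs mattered.

```python
# Minimal sketch (assumes scikit-learn): a black-box model's prediction
# carries no rationale; permutation importance gives only a coarse,
# post-hoc approximation of which inputs drove it.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# The model returns a label and a probability, but no human-readable reason.
print("prediction:", model.predict(X_te[:1]), model.predict_proba(X_te[:1]))

# Post-hoc explanation: shuffle each feature and measure the accuracy drop.
# Large drops suggest (but do not prove) that a feature drove decisions.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```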
Limited Logical Reasoning
LLMs can provide seemingly sensible but ultimately incorrect or invalid justifications when answering questions.
Mapped to: 7.3 Lack of capability or robustness
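As a worked illustration of what makes a justification invalid (a generic logic example, not drawn from Liu et al.), the brute-force truth-table check below shows that modus ponens is valid while "if P then Q; Q; therefore P" (affirming the consequent), a pattern that plausible-sounding rationales sometimes follow, admits a countermodel.

```python
# Illustration: exhaustively check whether an argument form is valid,
# i.e., whether the conclusion holds in every assignment that satisfies
# all the premises.
from itertools import product

def valid(premises, conclusion):
    # Enumerate all truth assignments for the two propositions P and Q.
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # found a countermodel
    return True

# Modus ponens: if P then Q; P; therefore Q -- valid.
print(valid([lambda p, q: (not p) or q, lambda p, q: p], lambda p, q: q))  # True

# Affirming the consequent: if P then Q; Q; therefore P -- invalid,
# even though it can sound like a sensible justification.
print(valid([lambda p, q: (not p) or q, lambda p, q: q], lambda p, q: p))  # False
```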
Limited Causal Reasoning
Causal reasoning makes inferences about the relationships between events or states of the world, mostly by identifying cause-effect relationships.
Mapped to: 7.3 Lack of capability or robustness
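A standard pitfall that correct causal reasoning must avoid is reading a cause-effect relationship into correlation produced by a hidden common cause. The simulation below (synthetic data, purely illustrative and not an example from the paper) generates two variables with no causal link that nonetheless correlate strongly, and shows the association vanishing once the confounder is held approximately fixed.

```python
# Illustration: a hidden common cause Z drives both X and Y, so X and Y
# correlate strongly even though neither causes the other.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)            # confounder (shared underlying factor)
x = 2.0 * z + rng.normal(size=n)  # X depends on Z, not on Y
y = 3.0 * z + rng.normal(size=n)  # Y depends on Z, not on X

# Strong observed correlation despite no X -> Y or Y -> X mechanism
# (theoretically 6 / sqrt(50) ~ 0.85 for these coefficients).
print("corr(X, Y) =", round(float(np.corrcoef(x, y)[0, 1]), 3))

# Conditioning on the common cause removes the association: within a
# narrow band of Z, the residual X-Y correlation is near zero.
band = np.abs(z) < 0.1
print("corr(X, Y | Z~0) =", round(float(np.corrcoef(x[band], y[band])[0, 1]), 3))
```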
Other risks from Liu et al. (2024) (34)

Reliability → 3.1 False or misleading information
Reliability > Misinformation → 3.1 False or misleading information
Reliability > Hallucination → 3.1 False or misleading information
Reliability > Inconsistency → 7.3 Lack of capability or robustness
Reliability > Miscalibration → 3.1 False or misleading information
Reliability > Sycophancy → 3.1 False or misleading information