Degree of Transparency and Explainability
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
"Transparency is the characteristic of a system that describes the degree to which appropriate information about the system is communicated to relevant stakeholders, whereas explainability describes the property of an AI system to express important factors influencing the results of the AI system in a way that is understandable for humans. ... Information about the model underlying the decision-making process is relevant for transparency. Systems with a low degree of transparency can pose risks in terms of their fairness, security and accountability." (p. 19)
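To make the quoted definition of explainability concrete, a minimal sketch is a toy additive scorer whose per-feature contributions can be reported alongside its output, so a human can see which factors influenced the result. The feature names, weights, and applicant values below are illustrative assumptions, not taken from Steimers & Schneider (2022).

```python
# Toy illustration of explainability: a hand-rolled linear scorer that
# returns each feature's signed contribution to its output, making the
# "important factors influencing the results" visible to a human.
# All names and numbers are illustrative, not from the cited source.

def explain_score(features, weights):
    """Return the model score and each feature's signed contribution."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return score, contributions

# Hypothetical loan-scoring weights and applicant data.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

score, contributions = explain_score(applicant, weights)
# contributions makes the decision inspectable: 'debt' pulled the score
# down by 1.6 while 'income' added 2.0.
```

A black-box model with the same accuracy would yield only `score`, leaving stakeholders unable to audit which factors drove the decision, which is the accountability risk the passage describes.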
Other risks from Steimers & Schneider (2022) (7)
Fairness → 1.1 Unfair discrimination and misrepresentation
Privacy → 2.0 Privacy & Security
Degree of Automation and Control → 7.1 AI pursuing its own goals in conflict with human goals or values
Complexity of the Intended Task and Usage Environment → 7.3 Lack of capability or robustness
Security → 2.2 AI system security vulnerabilities and attacks
System Hardware → 7.3 Lack of capability or robustness