Lack of explainability
Challenges in understanding or explaining the decision-making processes of AI systems. This opacity can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and an inability to identify and correct errors.
"The explainability of AI systems based on so-called black-box models is often limited. This opaqueness of AI systems can prevent developers from detecting shortcomings in the data or the model itself and decrease the performance and safety levels of the AI system." (p. 10)
Other risks from Schnitzer2024 (24):
- Inadequate specification of ODD → 7.3 Lack of capability or robustness
- Inappropriate degree of automation → 7.2 AI possessing dangerous capabilities
- Inadequate planning of performance requirements → 7.3 Lack of capability or robustness
- Insufficient AI development documentation → 7.4 Lack of transparency or interpretability
- Inappropriate degree of transparency to end users → 7.4 Lack of transparency or interpretability
- Choice of untrustworthy data source → 7.0 AI System Safety, Failures & Limitations