Inappropriate degree of transparency to end users
Risk Domain
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
"The transparency to end users of the AI system increases the user’s trust in the AI application. If not adequately integrated into the design, this might prevent the proper operation and cause potential misuse of the AI application."(p. 9)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Schnitzer2024 (24)

| Risk | Risk subdomain | Entity | Intent | Timing |
| --- | --- | --- | --- | --- |
| Inadequate specification of ODD | 7.3 Lack of capability or robustness | Human | Unintentional | Pre-deployment |
| Inappropriate degree of automation | 7.2 AI possessing dangerous capabilities | AI system | Unintentional | Post-deployment |
| Inadequate planning of performance requirements | 7.3 Lack of capability or robustness | Human | Unintentional | Pre-deployment |
| Insufficient AI development documentation | 7.4 Lack of transparency or interpretability | Human | Other | Pre-deployment |
| Choice of untrustworthy data source | 7.0 AI System Safety, Failures & Limitations | Human | Unintentional | Pre-deployment |
| Lack of data understanding | 7.0 AI System Safety, Failures & Limitations | Human | Unintentional | Pre-deployment |