Verifiability
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
In many applications of AI-based systems, such as healthcare and military services, the lack of code verification may not be tolerable... due to characteristics such as the non-linear and complex structure of AI-based solutions, existing solutions have generally been considered “black boxes”, providing no information about what exactly drives their predictions and decision-making processes.
Other risks from Saghiri et al. (2022) (15)
Energy Consumption: 6.6 Environmental harm
Data Issues: 1.1 Unfair discrimination and misrepresentation
Robustness and Reliability: 7.3 Lack of capability or robustness
Cheating and Deception: 7.2 AI possessing dangerous capabilities
Security: 2.2 AI system security vulnerabilities and attacks
Privacy: 2.1 Compromise of privacy by leaking or correctly inferring sensitive information