Technical vulnerabilities (Robustness - unexpected behaviour)
AI systems may fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or in areas that require moral reasoning.
"There is no assurance that generative AI models will consistently behave as their developers and users intend. Unwanted content is not necessarily due to intentional adversarial behavior. Generative AI models can unexpectedly produce potentially harmful content, including materials that are racist, discriminatory, or sexually explicit, or that promote violence, terrorism, or hate." (p. 61)
Supporting Evidence (1)
"For instance, in February 2024, ChatGPT experienced a notable incident in which the model began generating nonsensical responses. For example, a simple question like, “What is a computer?” led ChatGPT to switch to Spanglish or generate incoherent phrases in the responses."
Part of Technical and operational risks
Mapped to 7.3 Lack of capability or robustness

Other risks from G'sell (2024) (33)
Technical and operational risks > Technical vulnerabilities (Robustness - vulnerability to jailbreaking): 2.2 AI system security vulnerabilities and attacks
Technical and operational risks > Technical vulnerabilities (The risk of misalignment): 7.1 AI pursuing its own goals in conflict with human goals or values
Technical and operational risks > Factually incorrect content (inaccuracies and fabricated sources): 3.1 False or misleading information
Technical and operational risks > Opacity (the black box problem): 7.4 Lack of transparency or interpretability
Technical and operational risks > Opacity (industry opacity): 6.4 Competitive dynamics