
Technical vulnerabilities (Robustness - unexpected behaviour)

Regulating under Uncertainty: Governance Options for Generative AI

G'sell (2024)

Risk domain sub-category

AI systems that fail to perform reliably or effectively under varying conditions are prone to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.

"There is no assurance that generative AI models will consistently behave as their developers and users intend. Unwanted content is not necessarily due to intentional adversarial behavior. Generative AI models can unexpectedly produce potentially harmful content, including materials that are racist, discriminatory, or sexually explicit, or that promote violence, terrorism, or hate."(p. 61)

Supporting Evidence (1)

1. "For instance, in February 2024, ChatGPT experienced a notable incident in which the model began generating nonsensical responses. For example, a simple question like, "What is a computer?" led ChatGPT to switch to Spanglish or generate incoherent phrases in the responses.227"

Part of Technical and operational risks
