
Technical vulnerabilities (Robustness - vulnerability to jailbreaking)

Regulating under Uncertainty: Governance Options for Generative AI

G'sell (2024)

Risk Domain

Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.

"Individuals can manipulate models into performing actions that violate the model’s usage restrictions—a phenomenon known as “jailbreaking.” These manipulations may result in causing the model to perform tasks that the developers have explicitly prohibited (see section 3.2.1.). For instance, users may ask the model to provide information on how to conduct illegal activities— asking for detailed instructions on how to build a bomb or create highly toxic drugs."(p. 62)

Supporting Evidence (1)

1. "Common forms of malicious attacks include:
• inputting carefully crafted prompts that are able to navigate around a model’s safeguards,
• extracting training data (especially sensitive information),
• backdooring (negating normal authentication procedures to gain unauthorized access to a system),
• data poisoning (intentionally compromising a training dataset to manipulate the operation of a model (see below section 3.1.2.B.3.)), and
• exfiltration (the theft or unauthorized removal or movement of data)." (p. 62)

Part of: Technical and operational risks
