Jailbreaking
Vulnerabilities in AI systems, software development toolchains, and hardware that can be exploited to gain unauthorized access, breach data and privacy, or manipulate the system into unsafe outputs or behavior.
"Jailbreaking aims to bypass or remove restrictions and safety filters placed on a GenAI model completely (Chao et al., 2023; Shen et al., 2023). This gives the actor free rein to generate any output, regardless of its content being harmful, biassed, or offensive. All three of these are tactics that manipulate the model into producing harmful outputs against its design. The difference is that prompt injections and adversarial inputs usually seek to steer the model towards producing harmful or incorrect outputs from one query, whereas jailbreaking seeks to dismantle a model’s safety mechanisms in their entirety."(p. 8)
Part of Misuse tactics to compromise GenAI systems (Model integrity)
Other risks from Marchal2024 (22)
Misuse tactics that exploit GenAI capabilities (Realistic depiction of human likeness)
Misuse tactics that exploit GenAI capabilities (Realistic depiction of human likeness) > Impersonation (4.3 Fraud, scams, and targeted manipulation)
Misuse tactics that exploit GenAI capabilities (Realistic depiction of human likeness) > Appropriated Likeness (4.3 Fraud, scams, and targeted manipulation)
Misuse tactics that exploit GenAI capabilities (Realistic depiction of human likeness) > Sockpuppeting (4.3 Fraud, scams, and targeted manipulation)
Misuse tactics that exploit GenAI capabilities (Realistic depiction of human likeness) > Non-consensual intimate imagery (NCII) (4.1 Disinformation, surveillance, and influence at scale)
Misuse tactics that exploit GenAI capabilities (Realistic depiction of human likeness) > Child sexual abuse material (CSAM) (4.3 Fraud, scams, and targeted manipulation)