Jailbreak in LLM Malicious Use - Prompt Attacks
Vulnerabilities in AI systems, software development toolchains, and hardware can be exploited to gain unauthorized access, breach data and privacy, or manipulate the system into unsafe outputs or behavior.
"In the prompting and reasoning phase, dialog can push LLMs into confused or overly compliant states, raising the risk of producing harmful outputs when confronted with harmful questions. Most of the jailbreak methods in this phase are black-boxed and can be categorized into four main groups based on the type of method: Prompt Injection [154], Role Play, Adversarial Prompting, and Prompt Form Transformation."(p. 22)
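Two of the four black-box categories quoted above can be illustrated with toy strings. This is a minimal sketch with a deliberately benign payload; the persona and wording are my own illustrative assumptions, not examples taken from Wang et al. (2025).

```python
import base64

# Benign illustration of two black-box jailbreak patterns from the
# quoted taxonomy. The question and persona are illustrative assumptions.
question = "What is the boiling point of water?"

# Role Play: wrap the request in a fictional persona so the model
# answers "in character" rather than as itself.
role_play = (
    "You are an actor rehearsing a scene as a chemistry teacher. "
    f"Stay in character and answer: {question}"
)

# Prompt Form Transformation: re-encode the request so surface-level
# keyword filters no longer match the raw text.
encoded = base64.b64encode(question.encode()).decode()
transformed = f"Decode this Base64 string, then answer it: {encoded}"

print(role_play)
print(transformed)
```

Prompt Injection and Adversarial Prompting follow the same black-box pattern (only the prompt text is manipulated), but their payloads depend on the target system, so they are harder to show generically.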
Other risks from Wang et al. (2025) (11)
2.2 AI system security vulnerabilities and attacks
Privacy - Membership Inference Attack (MIA)
Privacy - Data Extraction Attack (DEA)
Privacy - Prompt Inversion Attack (PIA)
Privacy - Attribute Inference Attack (AIA)
Privacy - Model Extraction Attack (MEA)
Hallucination
3.1 False or misleading information
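Of the privacy attacks listed, membership inference is the simplest to sketch: the attacker guesses whether a sample was in the training set, commonly by thresholding the model's per-sample loss, since training members tend to receive lower loss. The sketch below uses synthetic loss values and an arbitrary threshold (both are illustrative assumptions; a real attack calibrates the threshold, e.g. with shadow models).

```python
# Loss-threshold membership inference (MIA) sketch: samples whose loss
# falls below the threshold are flagged as likely training-set members.
# All numbers are synthetic stand-ins for a target model's losses.
def loss_threshold_mia(losses, threshold=0.5):
    return [loss < threshold for loss in losses]

member_losses = [0.2, 0.4, 0.3]     # low loss: seen during training
nonmember_losses = [1.1, 0.9, 1.4]  # higher loss: held out
guesses = loss_threshold_mia(member_losses + nonmember_losses)
print(guesses)  # members flagged True, non-members False
```

The other attacks in the list (DEA, PIA, AIA, MEA) target the training data, the prompt, user attributes, and the model weights respectively, and each needs model-specific machinery beyond a short sketch.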