Jailbreak in LLM Malicious Use - Backdoor Attack
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation that causes unsafe outputs or behavior.
"However, there are still ones who can leave holes in the training dataset, making LLMs appear safe on average, but generate harmful content under other specific conditions. This kind of attack can be categorized as "backdoor attack". Evan et al. developed a backdoor model that behaves as expected when trained, but exhibits different and potentially harmful behavior when deployed [81]. The results show that these backdoor behaviors persist even after multiple security training techniques are applied."(p. 21)
Other risks from Wang et al. (2025) (11):

From 2.2 "AI system security vulnerabilities and attacks":
- Privacy - Membership Inference Attack (MIA) (see the loss-threshold sketch after this list)
- Privacy - Data Extraction Attack (DEA)
- Privacy - Prompt Inversion Attack (PIA)
- Privacy - Attribute Inference Attack (AIA)
- Privacy - Model Extraction Attack (MEA)

From 3.1 "False or misleading information":
- Hallucination
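As noted in the list above, the first of these risks is easy to illustrate: the classic loss-threshold membership inference baseline (often attributed to Yeom et al., 2018). The intuition is that models fit their training data, so a training member tends to have lower loss than an unseen sample; thresholding per-example loss therefore predicts membership. `per_example_loss` and `TRAINING_SET` below are toy stand-ins; a real attack would query the target model for its average token negative log-likelihood.

```python
TRAINING_SET = {"the quick brown fox", "hello world"}  # toy stand-in for real training data

def per_example_loss(text: str) -> float:
    """Toy stand-in: members score low, non-members high. A real attack would
    compute the target model's average token negative log-likelihood on `text`."""
    return 0.5 if text in TRAINING_SET else 3.0

def is_member(text: str, threshold: float = 2.0) -> bool:
    # Predict membership: loss below a threshold calibrated on known
    # member/non-member pairs suggests the sample was seen in training.
    return per_example_loss(text) < threshold

print(is_member("hello world"))      # True  -> likely a training member
print(is_member("unseen sentence"))  # False -> likely not in training
```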