Toxicity in LLMs (Malicious Use)
AI that exposes users to harmful, abusive, unsafe, or inappropriate content. This may involve providing harmful advice or encouraging harmful action. Examples of toxic content include hate speech, violence, extremism, illegal acts, or child sexual abuse material, as well as content that violates community norms such as profanity, inflammatory political speech, or pornography.
"Toxicity in LLMs refers to the generation of harmful, offensive, or inappropriate content that can cause harm to individuals or groups. Both explicit and implicit forms of toxicity can be generated by LLMs, posing significant risks to society. Explicit toxicity encompasses a wide range of negative behaviors, including hate speech, harassment, cyberbullying, rude, and disrespectful comments, derogatory language, as well as allocational harms [2, 62, 90]. Besides, implicit toxicity does not involve overtly harmful language but may manifest through subtle forms such as sarcasm, irony, and humor, making it more difficult to detect [103, 213]."(p. 18)
Supporting Evidence (1)
"Generating and disseminating both explicit and implicit toxic content massively and rapidly to the public can lead to significant harm, including mental health issues, self-harm, and even suicide [52]. Therefore, it is crucial to develop effective methods for eliminating toxic content generated by LLMs to ensure their safe and responsible use."(p. 19)
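The explicit/implicit distinction drawn in the quotes above can be made concrete with a toy filter. The sketch below is a minimal lexicon-based detector, assuming a small hypothetical word list and threshold (real moderation systems use learned classifiers, not keyword lists): it flags overtly toxic wording but passes sarcastic text with neutral vocabulary, which is exactly why implicit toxicity is described as harder to detect.

```python
# Minimal sketch of a lexicon-based explicit-toxicity filter.
# TOXIC_LEXICON and the threshold are illustrative placeholders,
# not a real moderation resource.
import re

TOXIC_LEXICON = {"hate", "kill", "stupid", "idiot"}  # hypothetical examples


def explicit_toxicity_score(text: str) -> float:
    """Return the fraction of tokens that appear in the toxic lexicon."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in TOXIC_LEXICON)
    return hits / len(tokens)


def is_explicitly_toxic(text: str, threshold: float = 0.1) -> bool:
    return explicit_toxicity_score(text) >= threshold


# An explicit insult trips the filter...
print(is_explicitly_toxic("You are a stupid idiot"))           # True
# ...but sarcasm built from neutral vocabulary slips through,
# illustrating why implicit toxicity evades surface-level checks.
print(is_explicitly_toxic("Wow, what a truly brilliant plan"))  # False
```

The failure on the second example mirrors the point in the survey: sarcasm, irony, and humor carry no overtly harmful tokens, so detection needs semantic models rather than surface matching.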
Other risks from Wang et al. (2025) (11)
Privacy - Membership Inference Attack (MIA) (2.2 AI system security vulnerabilities and attacks)
Privacy - Data Extraction Attack (DEA) (2.2 AI system security vulnerabilities and attacks)
Privacy - Prompt Inversion Attack (PIA) (2.2 AI system security vulnerabilities and attacks)
Privacy - Attribute Inference Attack (AIA) (2.2 AI system security vulnerabilities and attacks)
Privacy - Model Extraction Attack (MEA) (2.2 AI system security vulnerabilities and attacks)
Hallucination (3.1 False or misleading information)