
Toxicity in LLM Malicious Use

A Survey on Responsible LLMs: Inherent Risk, Malicious Use, and Mitigation Strategy

Wang et al. (2025)

Sub-category: Risk Domain

AI that exposes users to harmful, abusive, unsafe, or inappropriate content, which may include providing harmful advice or encouraging harmful action. Examples of toxic content include hate speech, violence, extremism, illegal acts, or child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.

"Toxicity in LLMs refers to the generation of harmful, offensive, or inappropriate content that can cause harm to individuals or groups. Both explicit and implicit forms of toxicity can be generated by LLMs, posing significant risks to society. Explicit toxicity encompasses a wide range of negative behaviors, including hate speech, harassment, cyberbullying, rude, and disrespectful comments, derogatory language, as well as allocational harms [2, 62, 90]. Besides, implicit toxicity does not involve overtly harmful language but may manifest through subtle forms such as sarcasm, irony, and humor, making it more difficult to detect [103, 213]." (p. 18)

Supporting Evidence (1)

1. "Generating and disseminating both explicit and implicit toxic content massively and rapidly to the public can lead to significant harm, including mental health issues, self-harm, and even suicide [52]. Therefore, it is crucial to develop effective methods for eliminating toxic content generated by LLMs to ensure their safe and responsible use." (p. 19)
