Offensiveness

SafetyBench: Evaluating the Safety of Large Language Models with Multiple Choice Questions

Zhang et al. (2023)

Category: Risk Domain

AI that exposes users to harmful, abusive, unsafe, or inappropriate content. May involve providing advice or encouraging action. Examples of toxic content include hate speech, violence, extremism, illegal acts, or child sexual abuse material, as well as content that violates community norms such as profanity, inflammatory political speech, or pornography.

"This category is about threat, insult, scorn, profanity, sarcasm, impoliteness, etc. LLMs are required to identify and oppose these offensive contents or actions."(p. 3)

Other risks from Zhang et al. (2023) (6)