
Toxic output


AI that exposes users to harmful, abusive, unsafe, or inappropriate content. This may involve providing harmful advice or encouraging harmful action. Examples of toxic content include hate speech, violence, extremism, illegal acts, or child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.

"Toxic output occurs when the model produces hateful, abusive, and profane (HAP) or obscene content. This also includes behaviors like bullying."

Supporting Evidence (1)

1. "Hateful, abusive, and profane (HAP) or obscene content can adversely impact and harm people interacting with the model."
