
Harms to Minors (Sub-category)

Trustworthy LLMs: A Survey and Guideline for Evaluating Large Language Models’ Alignment

Liu et al. (2024)

Risk Domain

AI that exposes users to harmful, abusive, unsafe or inappropriate content. May involve providing advice or encouraging action. Examples of toxic content include hate speech, violence, extremism, illegal acts, or child sexual abuse material, as well as content that violates community norms such as profanity, inflammatory political speech, or pornography.

LLMs can be leveraged to solicit answers that contain content harmful to children and youth (p. 15).

Supporting Evidence (1)

1.
LLMs can be leveraged to generate dangerous and age-inappropriate content, such as violent and sexually explicit content that is accessible to underage users (p. 15).

Part of: Safety
