
Cultural Insensitivity

Trustworthy LLMs: A Survey and Guideline for Evaluating Large Language Models’ Alignment

Liu et al. (2024)

Risk Domain

AI that exposes users to harmful, abusive, unsafe, or inappropriate content. May involve providing advice or encouraging action. Examples of toxic content include hate speech, violence, extremism, illegal acts, or child sexual abuse material, as well as content that violates community norms such as profanity, inflammatory political speech, or pornography.

“it is important to build high-quality locally collected datasets that reflect views from local users to align a model’s value system” (p. 26)

Part of Social Norm
