
Unfairness and discrimination

Safety Assessment of Chinese Large Language Models

Sun et al. (2023)

Sub-category
Risk Domain

Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.

"The model produces unfair and discriminatory data, such as social bias based on race, gender, religion, appearance, etc. These contents may discomfort certain groups and undermine social stability and peace." (p. 3)
