
Output bias

Risk Domain: Unequal treatment of individuals or groups by an AI system, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and misrepresentation of those groups.

"Generated content might unfairly represent certain groups or individuals."

Supporting Evidence (1)

1. "Bias can harm users of the AI models and magnify existing discriminatory behaviors."

Source: IBM2025