Bias

Sub-category: Risk Domain

Unequal treatment of individuals or groups by AI systems, often on the basis of race, gender, or other sensitive characteristics, resulting in unfair outcomes for and unfair representation of those groups.

"General-purpose AI systems can amplify social and political biases, causing concrete harm. They frequently display biases with respect to race, gender, culture, age, disability, political opinion, or other aspects of human identity. This can lead to discriminatory outcomes including unequal resource allocation, reinforcement of stereotypes, and systematic neglect of certain groups or viewpoints." (p. 92)

Supporting Evidence (2)

1.
"Bias in AI has many sources, like poor training data and system design choices. General-purpose AI is primarily trained on language and image datasets that disproportionately represent English-speaking and Western cultures. This contributes to biased output. Certain design choices, such as content filtering techniques used to align systems with particular worldviews, can also contribute to biased output." (p. 92)
2.
"Bias: Systematic errors in algorithmic systems that favour certain groups or worldviews and often create unfair outcomes for some people. Bias can have multiple sources, including errors in algorithmic design, unrepresentative or otherwise flawed datasets, or pre-existing social inequalities. ● Discrimination: The unfair treatment of individuals or groups based on their attributes, such as race, gender, age, religion, or other protected characteristics." (p. 92)
