
Bias and discrimination (value lock and outcome homogenization)

Regulating under Uncertainty: Governance Options for Generative AI

G'sell (2024)


Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and misrepresentation of those groups.

"Because models are not necessarily retrained to reflect evolving societal views, language models risk “value lock-ins,” which “reifies older, less inclusive understandings.”370 Therefore, the continued use of outdated models may limit the presentation or exploration of alternative perspectives. Moreover, the deployment of identical foundation models by various downstream deployers poses a risk of “outcome homogenization,” creating a potential for homogeneity of bias across broad swathes of society. Identical and widely deployed models with prejudicial training datasets could further entrench existing biases in society. This phenomenon, in turn, has the potential to “institutionalize systemic exclusion and reinforce existing social hierarchies.” (p. 81)
