Bias and discrimination (value lock-in and outcome homogenization)
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes for, and unfair representation of, those groups.
Because models are not necessarily retrained to reflect evolving societal views, language models risk “value lock-ins,” which “reif[y] older, less inclusive understandings.” The continued use of outdated models may therefore limit the presentation or exploration of alternative perspectives. Moreover, the deployment of identical foundation models by various downstream deployers poses a risk of “outcome homogenization,” creating a potential for homogeneity of bias across broad swathes of society. Identical and widely deployed models with prejudicial training datasets could further entrench existing biases in society. This phenomenon, in turn, has the potential to “institutionalize systemic exclusion and reinforce existing social hierarchies.” (p. 81)
Other risks from G'sell (2024) (33)
Technical and operational risks
7.3 Lack of capability or robustness
Technical and operational risks > Technical vulnerabilities (Robustness - unexpected behaviour) → 7.3 Lack of capability or robustness
Technical and operational risks > Technical vulnerabilities (Robustness - vulnerability to jailbreaking) → 2.2 AI system security vulnerabilities and attacks
Technical and operational risks > Technical vulnerabilities (The risk of misalignment) → 7.1 AI pursuing its own goals in conflict with human goals or values
Technical and operational risks > Factually incorrect content (inaccuracies and fabricated sources) → 3.1 False or misleading information
Technical and operational risks > Opacity (the black box problem) → 7.4 Lack of transparency or interpretability