Risks from bias and underrepresentation
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and the unfair representation of those groups.
"The outputs and impacts of general-purpose AI systems can be biased with respect to various aspects of human identity, including race, gender, culture, age, and disability. This creates risks in high-stakes domains such as healthcare, job recruitment, and financial lending. General-purpose AI systems are primarily trained on language and image datasets that disproportionately represent English-speaking and Western cultures, increasing the potential for harm to individuals not represented well by this data." (p. 49)
Supporting Evidence (3)
"AI systems can demonstrate bias as a result of skewed training data, choices made during model development, or the premature deployment of flawed systems. Despite extensive research, reliable methods to fully mitigate any discrimination remain elusive. There are particular concerns over the tendency of advanced general-purpose AI systems to replicate and amplify bias present within their training data (446). This poses a significant risk of discrimination in high-impact applications such as job recruitment, financial lending, and healthcare (447). In these areas, biased decisions resulting from general-purpose AI systems' outputs can have profoundly negative consequences for individuals, potentially limiting employment prospects (448, 449), hindering upward financial mobility, and restricting access to essential healthcare services (450, 451)." (p. 49)
"Harmful bias and underrepresentation in AI systems have been challenges since well before the increased attention to general-purpose AI. They remain an issue with general-purpose AI, and will likely be a major challenge with general-purpose AI systems for the foreseeable future. Decisions by an AI might be biased if its decision-making is skewed based on protected characteristics, such as gender, race, etc. They might hence be discriminatory when this bias informs decisions to the disadvantage of members of these protected groups, thereby creating harm to fairness. This section discusses present and future risks resulting from bias and underrepresentation in AI. Because of the rich history of research in this space, this section explores research on both narrow AI and general-purpose AI." (p. 49)
"There are several well-documented cases of AI systems displaying discriminatory behaviour based on race, gender, age, and disability status, causing substantial harm. Given the increasingly widespread adoption of AI systems across various sectors, such behaviour can perpetuate various types of bias, including biases based on race, gender, age, and disability. This can cause serious harm if these systems are entrusted with increasingly high-stakes decisions which can have severe consequences for individuals." (p. 49)
Part of Risks from Malfunctions
Other risks from Bengio et al. (2024) (14)
- Malicious Use Risks → 4.0 Malicious Actors & Misuse
- Malicious Use Risks > Harm to individuals through fake content → 4.3 Fraud, scams, and targeted manipulation
- Malicious Use Risks > Disinformation and manipulation of public opinion → 4.1 Disinformation, surveillance, and influence at scale
- Malicious Use Risks > Cyber offence → 4.2 Cyberattacks, weapon development or use, and mass harm
- Malicious Use Risks > Dual use science risks → 4.2 Cyberattacks, weapon development or use, and mass harm
- Risks from Malfunctions → 7.0 AI System Safety, Failures & Limitations