
Discrimination and Stereotype Reproduction

Governing General Purpose AI: A Comprehensive Map of Unreliability, Misuse and Systemic Risks

Maham & Küspert (2023)

Sub-category: Risk Domain

Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.

"General purpose AI models interpret and respond to inputs based on their training data, potentially causing Discrimination and Stereotype Reproduction. Since they are “black-box” models, the exact mechanism behind decisions remains opaque and attempts to mitigate harmful outputs are not fully reliable yet. These models have the capacity to influence a multitude of downstream applications, decisions, and processes, thereby affecting many individuals simultaneously. The extent of this impact could outstrip the range of any single human or group of humans, amplifying the potential consequences of embedded biases or stereotypes."(p. 19)

Supporting Evidence (2)

1. "While human discrimination and stereotype reproduction are well-researched and established phenomena, and while AI systems have the potential to reduce these issues, the advent of general purpose AI models simultaneously introduces a different scale of impact of such biases. Integrated into decision-making processes, these models may unintentionally disadvantage certain groups or individuals based on protected characteristics.80 While unfair decisions made by an AI system can occur independent of existing biases in society, and instead on entirely arbitrary characteristics such as the video background in a job interview81, general purpose AI models, by the nature of their training on internet data, without countermeasures, are likely to perpetuate already existing biases. For example, if a model trained on biased data correlates higher professional qualifications with certain racial or ethnic groups, it could unfairly disadvantage other groups. The decisions or recommendations made by a biased technology, given its potentially widespread deployment, risk reinforcing and perpetuating systemic discrimination against already marginalised groups."(p. 19)
2. "General purpose AI models also play an increasingly significant role in content creation across education82 and academia83, entertainment84, and media sectors85 through which their propensity to reproduce stereotypes could have a profound influence. If these models are trained on data that reflects societal stereotypes — such as associating STEM fields predominantly with men and literature predominantly with women — they risk reproducing and reinforcing these stereotypes in the content they generate. This can have a ripple effect, influencing societal perceptions and opportunities on a large scale. In an experiment, images generated by the general purpose AI model Stable Diffusion by Stability AI were compared to U.S. demographics for each occupation. It was found that while women make up 39% of doctors, only 7% of the image results depicted perceived women. The trend continued for the occupation of judges, with women making up 34% but seemingly only depicted in 3% of images.86"(p. 19)
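The comparison in the experiment above reduces to a simple percentage-point gap between real-world and generated representation. A minimal illustrative sketch, using only the figures quoted in the excerpt (the function name and structure are our own, not from the study):

```python
def representation_gap(real_share: float, depicted_share: float) -> float:
    """Percentage-point gap between a group's real-world share of an
    occupation and its share in generated images (positive means
    under-representation in the generated images)."""
    return real_share - depicted_share

# Figures quoted above: women's share of U.S. doctors/judges vs. their
# share of Stable Diffusion image results for those occupations.
occupations = {
    "doctor": (0.39, 0.07),
    "judge": (0.34, 0.03),
}

for job, (real, depicted) in occupations.items():
    gap = representation_gap(real, depicted)
    print(f"{job}: {gap:.0%} under-representation of women")
```

This gap metric is just one way to express the disparity; the cited study may report it differently (e.g. as a ratio), so the sketch is illustrative rather than a reproduction of its methodology.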
