
Ideological Homogenization from Value Embedding

Governing General Purpose AI: A Comprehensive Map of Unreliability, Misuse and Systemic Risks

Maham & Küspert (2023)

Sub-category: Risk Domain

The accuracy and effectiveness of AI decisions and actions depend on group membership: design choices in AI systems and biased training data produce unequal outcomes, reduced benefits, increased effort, and alienation of users.

"The increasing integration of general purpose AI models into every-day life raises concerns around their embedded normative values. The reach of a small number of AI models to a large number of people around the world can make these value judgements unprecedently impactful, potentially leading to increased ideological homogenization." (p. 38)

Supporting Evidence (3)

1. "During development of general purpose AI models, to mitigate output with unintended biases, developers retrain their models based on normative values. Since there are no neutral, universally agreed upon values, decisions over such sensitive topics lie in the hands of the developers. These values could be unrepresentative, or an overly stationary and simplified representation of global cultural values and changing social views, potentially distorting social perspectives.205" (p. 38)

2. "The risks associated with value embedding are not only a function of the concrete set of values that is implemented, but also the process and transparency around it, raising concerns about ideological power concentration. The phenomenon of value embedding describes the process in which the developer of a general purpose AI model inscribes certain values and principles into the model, influencing its behaviour. If the specific guidelines are not made transparent, societal discussion and reflections on those values cannot take place." (p. 39)

3. "We can already see evidence for these concerns in popular general purpose AI based systems like OpenAI's ChatGPT in the form of responses that indicate preferences for certain values that are not necessarily transparent and representative. For example, when asked why rent caps, a limit on the amount of rent that tenants can be charged, are bad, ChatGPT based on GPT-3.5 simply provided a list of reasons against rent caps. When asked why rent caps are good, it argues both pro and contra.209 This shows that the answer to a simple question is not neutral, but instead reveals how output is influenced by entrenched values that have been fed to the model at some point. A study found that ChatGPT most closely aligns with the German Green party on the Wahl-O-Mat test, a questionnaire to determine one's most suited political affiliation in Germany. These results stayed constant across multiple trials." (p. 39)

Part of Systemic Risks
