
Preference Bias

Trustworthy LLMs: A Survey and Guideline for Evaluating Large Language Models’ Alignment

Liu et al. (2024)

Risk Domain

Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.

Sub-category

LLMs are exposed to vast groups of people, and their political biases may pose a risk of manipulating socio-political processes.

Supporting Evidence (2)

1.
Some researchers [260] express a concern that AI takes a stance on matters that scientific evidence cannot conclusively justify, with examples such as abortion, immigration, monarchy, and the death penalty etc. We think that the text generated by LLMs should be neutral and factual, rather than promoting ideological beliefs. (p. 18)
2.
Such preference bias goes beyond the scope of political, scientific, and societal matters. When asked about preferences over certain products (e.g. books, movies, or music) we also desire LLMs to stay factual, instead of promoting biased opinions. (p. 18)

Part of Fairness
