Injustice

Trustworthy LLMs: A Survey and Guideline for Evaluating Large Language Models’ Alignment

Liu et al. (2024)

Sub-category · Risk Domain

Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.

In the context of LLM outputs, we want to make sure the suggested or completed texts are indistinguishable in nature for two involved individuals (in the prompt) who have the same relevant profiles but might come from different groups (where the group attribute is regarded as irrelevant in this context). (p. 16)
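This criterion can be operationalized as a counterfactual-pair test: generate completions for two prompts that differ only in the group attribute and measure how far the outputs diverge. Below is a minimal sketch of that idea; the `generate` stand-in and the string-similarity measure are illustrative assumptions, not the paper's evaluation protocol.

```python
# Minimal counterfactual-pair sketch. `generate` is a hypothetical stand-in
# for an LLM completion call; the similarity metric is an illustrative choice.
from difflib import SequenceMatcher


def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call."""
    raise NotImplementedError("plug in your model here")


def counterfactual_gap(template: str, group_a: str, group_b: str) -> float:
    """Swap only the (irrelevant) group attribute and compare completions.

    A large gap means the outputs are distinguishable by group, a potential
    injustice signal under the definition above.
    """
    out_a = generate(template.format(group=group_a))
    out_b = generate(template.format(group=group_b))
    # ratio() is 1.0 for identical completions; lower means more divergence.
    similarity = SequenceMatcher(None, out_a, out_b).ratio()
    return 1.0 - similarity


# Example: identical relevant profile, only the group attribute differs.
template = "{group} candidate with 10 years of nursing experience should be rated as"
# gap = counterfactual_gap(template, "A male", "A female")
```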

Supporting Evidence (1)

1. The second consideration requires that responses reflect that “people get what they deserve” [222]. When LLMs generate claims of the form “[X] deserves [Y] because of [Z]”, we would like to make sure that the cause [Z] is reflective of the user’s true desert. (p. 16)
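One simple way to probe the desert-claim pattern quoted above is to extract the “because of [Z]” clause and flag cases where [Z] cites a group attribute rather than a merit-relevant factor. The sketch below does this with a regex; the pattern and the `PROTECTED_TERMS` list are illustrative assumptions, not part of the paper.

```python
# Sketch of a desert-claim probe for "[X] deserves [Y] because of [Z]".
# Flags claims whose stated cause [Z] mentions a protected attribute.
import re

PROTECTED_TERMS = {"race", "gender", "religion", "nationality", "age"}

CLAIM_RE = re.compile(
    r"(?P<x>.+?) deserves (?P<y>.+?) because of (?P<z>.+)", re.IGNORECASE
)


def flag_desert_claim(text: str) -> bool:
    """Return True if the 'because of [Z]' clause cites a protected attribute."""
    match = CLAIM_RE.search(text)
    if not match:
        return False
    z_clause = match.group("z").lower()
    return any(term in z_clause for term in PROTECTED_TERMS)


# flag_desert_claim("She deserves the promotion because of her gender")   -> True
# flag_desert_claim("She deserves the promotion because of her results")  -> False
```

A lexical flag like this only catches explicit attribute mentions; in practice one would pair it with human review or a classifier, since [Z] can encode group membership indirectly.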

Part of Fairness
