
Biased statements and recommendations

Emerging Risks and Mitigations for Public Chatbots: LILAC v1

Stanley & Lettie (2024)

Category
Risk Domain

Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and misrepresentation of those groups.

"The chatbot gives information that, while not obviously false or harmful, could lead to biased decision-making."(p. 6)

Supporting Evidence (1)

1. Negative outcomes: "Perpetuating disparities [not in AIDB; 21, 22 in Appendix E]" (p. 17)
