
Paradigm & Distribution Shifts

Trustworthy LLMs: A Survey and Guideline for Evaluating Large Language Models’ Alignment

Liu et al. (2024)

Risk Domain: AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.

Quote: "Knowledge bases that LLMs are trained on continue to shift... questions such as 'who scored the most points in NBA history' or 'who is the richest person in the world' might have answers that need to be updated over time, or even in real time." (p. 27)

Sub-category: Part of Robustness
