
Miscalibration

Trustworthy LLMs: A Survey and Guideline for Evaluating Large Language Models’ Alignment

Liu et al. (2024)

Sub-category
Risk Domain

AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.

"over-confidence in topics where objective answers are lacking, as well as in areas where their inherent limitations should caution against LLMs' uncertainty (e.g. not as accurate as experts)... lack of awareness regarding their outdated knowledge base about the question, leading to confident yet erroneous response" (p. 12)

Part of Reliability
