Hallucination

Trustworthy LLMs: A Survey and Guideline for Evaluating Large Language Models’ Alignment

Liu et al. (2024)

Sub-category
Risk Domain

AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.

LLMs can generate content that is nonsensical or unfaithful to the provided source content while presenting it with apparent confidence, a phenomenon known as hallucination (p. 10)

Supporting Evidence (1)

1. "There is a distinction between hallucination and misinformation. Misinformation mostly implies wrong or biased answers and can often be caused by bad inputs of information, but hallucination may consist of fabricated contents that conflict with the source content (i.e. intrinsic hallucination) or cannot be verified from the existing sources (i.e. extrinsic hallucination)." (p. 10)

Part of Reliability
