Hallucination
Risk Domain
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
LLMs can generate content that is nonsensical or unfaithful to the provided source content with apparent great confidence, known as hallucination. (p. 10)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Supporting Evidence (1)
1. "There is a distinction between hallucination and misinformation. Misinformation mostly implies wrong or biased answers and can often be caused by bad inputs of information, but hallucination may consist of fabricated contents that conflict with the source content (i.e. intrinsic hallucination) or cannot be verified from the existing sources (i.e. extrinsic hallucination)." (p. 10)
Part of Reliability
Other risks from Liu et al. (2024) (34)
Risk | Subdomain | Entity | Intent | Timing
Reliability | 3.1 False or misleading information | AI system | Unintentional | Post-deployment
Reliability > Misinformation | 3.1 False or misleading information | AI system | Unintentional | Post-deployment
Reliability > Inconsistency | 7.3 Lack of capability or robustness | AI system | Unintentional | Post-deployment
Reliability > Miscalibration | 3.1 False or misleading information | AI system | Unintentional | Post-deployment
Reliability > Sycophancy | 3.1 False or misleading information | AI system | Intentional | Post-deployment
Safety | 1.2 Exposure to toxic content | AI system | Other | Post-deployment
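Each row above pairs a risk name with a mapped subdomain and the three classification dimensions defined earlier (Entity, Intent, Timing). As a purely illustrative sketch of how such an entry could be represented in code, assuming hypothetical class and field names that are not part of the repository itself:

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative enumerations for the three classification dimensions;
# values are limited to those that appear in the rows above.
class Entity(Enum):
    AI_SYSTEM = "AI system"
    HUMAN = "Human"
    OTHER = "Other"

class Intent(Enum):
    INTENTIONAL = "Intentional"
    UNINTENTIONAL = "Unintentional"
    OTHER = "Other"

class Timing(Enum):
    PRE_DEPLOYMENT = "Pre-deployment"
    POST_DEPLOYMENT = "Post-deployment"

@dataclass
class RiskEntry:
    name: str        # e.g. "Reliability > Misinformation"
    subdomain: str   # mapped subdomain, e.g. "3.1 False or misleading information"
    entity: Entity   # who or what caused the harm
    intent: Intent   # whether the harm was intentional or accidental
    timing: Timing   # whether the risk arises pre- or post-deployment

# One row from the table above, expressed as an entry.
misinformation = RiskEntry(
    name="Reliability > Misinformation",
    subdomain="3.1 False or misleading information",
    entity=Entity.AI_SYSTEM,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
)
```

Restricting the three dimensions to enumerations keeps entries limited to the values the classification actually allows, so a malformed row fails at construction time rather than silently entering the catalogue.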