Knowledge Gaps
AI systems that inadvertently generate or spread incorrect or deceptive information can lead users to form inaccurate beliefs and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
"Since the training corpora of LLMs can not contain all possible world knowledge [114]–[119], and it is challenging for LLMs to grasp the long-tail knowledge within their training data [120], [121], LLMs inherently possess knowledge boundaries [107]. Therefore, the gap between knowledge involved in an input prompt and knowledge embedded in the LLMs can lead to hallucinations"(p. 7)
Part of Hallucinations
Other risks from Cui et al. (2024) (49)
- Harmful Content: 1.2 Exposure to toxic content
- Harmful Content > Bias: 1.1 Unfair discrimination and misrepresentation
- Harmful Content > Toxicity: 1.2 Exposure to toxic content
- Harmful Content > Privacy Leakage: 2.1 Compromise of privacy by leaking or correctly inferring sensitive information
- Untruthful Content: 3.1 False or misleading information
- Untruthful Content > Factuality Errors: 3.1 False or misleading information