Knowledge Gaps

Risk Taxonomy, Mitigation, and Assessment Benchmarks of Large Language Model Systems

Cui et al. (2024)

Category level: Sub-category

Risk domain: AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.

"Since the training corpora of LLMs can not contain all possible world knowledge [114]–[119], and it is challenging for LLMs to grasp the long-tail knowledge within their training data [120], [121], LLMs inherently possess knowledge boundaries [107]. Therefore, the gap between knowledge involved in an input prompt and knowledge embedded in the LLMs can lead to hallucinations"(p. 7)

Part of Hallucinations

Other risks from Cui et al. (2024): 49