
False Recall of Memorized Information

Risk Taxonomy, Mitigation, and Assessment Benchmarks of Large Language Model Systems

Cui et al. (2024)

Risk Domain

AI systems that inadvertently generate or spread incorrect or deceptive information, which can foster inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.

"Although LLMs indeed memorize the queried knowledge, they may fail to recall the corresponding information [122]. That is because LLMs can be confused by co-occurrence patterns [123], positional patterns [124], duplicated data [125]–[127] and similar named entities [113]." (p. 8)

Part of: Hallucinations
