
False information

Emerging Risks and Mitigations for Public Chatbots: LILAC v1

Stanley & Lettie (2024)

Category: Risk Domain

AI systems that inadvertently generate or spread incorrect or deceptive information, which can foster inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.

"The chatbot outputs information that contradicts known facts, authoritative sources, or provided source documents (also known as hallucination)."(p. 6)
