
Factually incorrect content (inaccuracies and fabricated sources)

Regulating under Uncertainty: Governance Options for Generative AI

G'sell (2024)

Risk Domain

AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.

"One of the most vexing problems associated with AI models is that they occasionally present false information as if it is factual—often with authoritative-sounding text and fabricated quotes and sources. This unpredictable phenomenon of generating false information is well known to AI researchers, who have termed such erroneous output with the euphemistic label “hallucination.”" (p. 64)

Supporting Evidence (1)

1. "The relative harm of false or misleading information can vary dramatically. Bad advice in response to a culinary query might lead to an unenjoyable meal or upset stomach, while erroneous responses to a medical question could have catastrophic consequences." (p. 63)

Part of Technical and operational risks
