Factually incorrect content (inaccuracies and fabricated sources)
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
"One of the most vexing problems associated with AI models is that they occasionally present false information as if it is factual—often with authoritative-sounding text and fabricated quotes and sources. This unpredictable phenomenon of generating false information is well known to AI researchers, who have termed such erroneous output with the euphemistic label “hallucination.” "(p. 64)
Supporting Evidence (1)
"The relative harm of false or misleading information can vary dramatically. Bad advice in response to a culinary query might lead to an unenjoyable meal or upset stomach, while erroneous responses to a medical question could have catastrophic consequences."(p. 63)
Part of: Technical and operational risks
Other risks from G'sell (2024) (33)
Technical and operational risks: 7.3 Lack of capability or robustness
Technical and operational risks > Technical vulnerabilities (Robustness - unexpected behaviour): 7.3 Lack of capability or robustness
Technical and operational risks > Technical vulnerabilities (Robustness - vulnerability to jailbreaking): 2.2 AI system security vulnerabilities and attacks
Technical and operational risks > Technical vulnerabilities (The risk of misalignment): 7.1 AI pursuing its own goals in conflict with human goals or values
Technical and operational risks > Opacity (the black box problem): 7.4 Lack of transparency or interpretability
Technical and operational risks > Opacity (industry opacity): 6.4 Competitive dynamics
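
For readers who want to work with these mappings programmatically, the sketch below shows one possible way to represent this entry and its related-risk pairs as a plain data structure. The class and field names (RiskEntry, related, by_subdomain) are illustrative assumptions, not a schema defined by the repository; only the string values come from the entry above.

from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """Illustrative record for one repository entry (hypothetical schema)."""
    title: str
    source: str
    category: str
    # (path within the source taxonomy, mapped subdomain) pairs
    related: list[tuple[str, str]] = field(default_factory=list)

entry = RiskEntry(
    title="Factually incorrect content (inaccuracies and fabricated sources)",
    source="G'sell (2024)",
    category="Technical and operational risks",
    related=[
        ("Technical and operational risks",
         "7.3 Lack of capability or robustness"),
        ("Technical and operational risks > Technical vulnerabilities "
         "(Robustness - unexpected behaviour)",
         "7.3 Lack of capability or robustness"),
        ("Technical and operational risks > Technical vulnerabilities "
         "(Robustness - vulnerability to jailbreaking)",
         "2.2 AI system security vulnerabilities and attacks"),
        ("Technical and operational risks > Technical vulnerabilities "
         "(The risk of misalignment)",
         "7.1 AI pursuing its own goals in conflict with human goals or values"),
        ("Technical and operational risks > Opacity (the black box problem)",
         "7.4 Lack of transparency or interpretability"),
        ("Technical and operational risks > Opacity (industry opacity)",
         "6.4 Competitive dynamics"),
    ],
)

# Example use: group the related risks by mapped subdomain code.
by_subdomain: dict[str, list[str]] = {}
for path, subdomain in entry.related:
    by_subdomain.setdefault(subdomain, []).append(path)

for subdomain, paths in by_subdomain.items():
    print(subdomain, "->", len(paths), "related risk(s)")

A flat list of (path, subdomain) pairs is used rather than a nested tree because the breadcrumb strings already encode the hierarchy; grouping by subdomain, as in the example, is then a one-pass operation.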