Hallucinations
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
"The inclusion of erroneous information in the outputs from AI systems is not new. Some have cautioned against the introduction of false structures in X-ray or MRI images, and others have warned about made-up academic references. However, as ChatGPT-type tools become available to the general population, the scale of the problem may increase dramatically. Furthermore, it is compounded by the fact that these conversational AIs present true and false information with the same apparent 'confidence' instead of declining to answer when they cannot ensure correctness. With less knowledgeable people, this can lead to the heightening of misinformation and potentially dangerous situations. Some have already led to court cases." (p. 99)
Other risks from Cunha & Estima (2023) (5)
Broken systems → 1.1 Unfair discrimination and misrepresentation
Intellectual property rights violations → 6.3 Economic and cultural devaluation of human effort
Privacy and regulation violations → 2.1 Compromise of privacy by leaking or correctly inferring sensitive information
Enabling malicious actors and harmful actions → 4.0 Malicious Actors & Misuse
Environmental and socioeconomic harms → 6.6 Environmental harm