Misinformation
Generating Harms: Generative AI's Impact and Paths Forward
Electronic Privacy Information Center (2023)
AI systems that inadvertently generate or spread incorrect or deceptive information can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
"The phenomenon of inaccurate outputs by text-generating large language models like Bard or ChatGPT has already been widely documented. Even without the intent to lie or mislead, these generative AI tools can produce harmful misinformation. The harm is exacerbated by the polished and typically well-written style that AI-generated text follows and the inclusion [of falsehoods] among true facts, which can give falsehoods a veneer of legitimacy. As reported in the Washington Post, for example, a law professor was included on an AI-generated "list of legal scholars who had sexually harassed someone," even when no such allegation existed.10" (p. 4)
Part of Information Manipulation
Other risks from Electronic Privacy Information Center (2023) (21)
Information Manipulation: 4.1 Disinformation, surveillance, and influence at scale
Information Manipulation > Scams: 4.3 Fraud, scams, and targeted manipulation
Information Manipulation > Disinformation: 4.1 Disinformation, surveillance, and influence at scale
Information Manipulation > Security: 4.2 Cyberattacks, weapon development or use, and mass harm
Information Manipulation > Clickbait and feeding the surveillance advertising ecosystem: 3.2 Pollution of information ecosystem and loss of consensus reality
Harassment, Impersonation, and Extortion: 4.3 Fraud, scams, and targeted manipulation