Data Security Risk
Generating Harms: Generative AI's Impact and Paths Forward
Electronic Privacy Information Center (2023)
Using AI systems to gain a personal advantage over others, for example through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted (or fabricated) individual for illegitimate financial benefit, and creating humiliating or sexual imagery.
"Just as every other type of individual and organization has explored possible use cases for generative AI products, so too have malicious actors. This could take the form of facilitating or scaling up existing threat methods, for example drafting actual malware code,87 business email compromise attempts,88 and phishing attempts.89 This could also take the form of new types of threat methods, for example mining information fed into the AI’s learning model dataset90 or poisoning the learning model data set with strategically bad data.91 We should also expect that there will be new attack vectors that we have not even conceived of yet made possible or made more broadly accessible by generative AI."(p. 30)
Other risks from Electronic Privacy Information Center (2023) (21)
- Information Manipulation: 4.1 Disinformation, surveillance, and influence at scale
- Information Manipulation > Scams: 4.3 Fraud, scams, and targeted manipulation
- Information Manipulation > Disinformation: 4.1 Disinformation, surveillance, and influence at scale
- Information Manipulation > Misinformation: 3.1 False or misleading information
- Information Manipulation > Security: 4.2 Cyberattacks, weapon development or use, and mass harm
- Information Manipulation > Clickbait and feeding the surveillance advertising ecosystem: 3.2 Pollution of information ecosystem and loss of consensus reality