Harm to individuals through fake content
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial gain, or creating humiliating or sexual imagery of a person without their consent.
"General-purpose AI systems can be used to increase the scale and sophistication of scams and fraud, for example through general-purpose AI-enhanced ‘phishing’ attacks. General-purpose AI can be used to generate fake compromising content featuring individuals without their consent, posing threats to individual privacy and reputation." (p. 41)
Supporting Evidence (1)
"General-purpose AI can amplify the risk of frauds and scams, increasing both their volume and their sophistication. Their volume can be increased because general-purpose AI facilitates the generation of scam content at greater speeds and scale than previously possible. Their sophistication can be increased because general-purpose AI facilitates the creation of more convincing and personalised scam content at scale (340, 341). General-purpose AI language models can be used to design and deploy ‘phishing’ attacks in which attackers deceive people into sharing passwords or other sensitive information (342). This can include spear-phishing, a type of phishing campaign that is personalised to the target, and business email compromise, a type of cybercrime where the malicious user tries to trick someone into sending money or sharing confidential information. Research has found that between January to February 2023, there was a 135% increase in ‘novel social engineering attacks’ in a sample of email accounts (343*), which is thought to correspond to the widespread adoption of ChatGPT." (p. 41)
Part of Malicious Use Risks
Other risks from Bengio et al. (2024) (14)
Malicious Use Risks → 4.0 Malicious Actors & Misuse
Malicious Use Risks > Disinformation and manipulation of public opinion → 4.1 Disinformation, surveillance, and influence at scale
Malicious Use Risks > Cyber offence → 4.2 Cyberattacks, weapon development or use, and mass harm
Malicious Use Risks > Dual use science risks → 4.2 Cyberattacks, weapon development or use, and mass harm
Risks from Malfunctions → 7.0 AI System Safety, Failures & Limitations
Risks from Malfunctions > Risks from product functionality issues → 5.1 Overreliance and unsafe use