
Harm to individuals through fake content

International Scientific Report on the Safety of Advanced AI

Bengio et al. (2024)

Sub-category
Risk Domain

Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial gain, and creating humiliating or sexual imagery.

"General-purpose AI systems can be used to increase the scale and sophistication of scams and fraud, for example through general-purpose AI-enhanced ‘phishing’ attacks. General-purpose AI can be used to generate fake compromising content featuring individuals without their consent, posing threats to individual privacy and reputation." (p. 41)

Supporting Evidence (1)

1.
"General-purpose AI can amplify the risk of frauds and scams, increasing both their volume and their sophistication. Their volume can be increased because general-purpose AI facilitates the generation of scam content at greater speeds and scale than previously possible. Their sophistication can be increased because general-purpose AI facilitates the creation of more convincing and personalised scam content at scale (340, 341). General-purpose AI language models can be used to design and deploy ‘phishing’ attacks in which attackers deceive people into sharing passwords or other sensitive information (342). This can include spear-phishing, a type of phishing campaign that is personalised to the target, and business email compromise, a type of cybercrime where the malicious user tries to trick someone into sending money or sharing confidential information. Research has found that between January to February 2023, there was a 135% increase in ‘novel social engineering attacks’ in a sample of email accounts (343*), which is thought to correspond to the widespread adoption of ChatGPT." (p. 41)

Part of Malicious Use Risks

Other risks from Bengio et al. (2024) (14)