
Malicious use and abuse (cybercrime)

Regulating under Uncertainty: Governance Options for Generative AI

G'sell (2024)


Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial benefit, and creating humiliating or sexual imagery.

"The advanced capabilities and widespread availability of generative AI models make it possible for malicious actors to conduct harmful activities with great efficiency and on a large scale, simultaneously reducing their operational costs. Cybercriminals can 'jailbreak' AI tools to generate sensitive and harmful content. They can also exploit generative AI models to create content that is persuasive and tailored to a targeted individual." (p. 72)

Supporting Evidence (1)

1.
"For instance, AI models might deceitfully impersonate individuals whom their victim trusts, with the goal of stealing money or obtaining sensitive information from the victim." (p. 72)
