Risks from malicious use
(p. 62)
Sub-categories (4)
Harm to individuals through fake content
"Malicious actors can use general-purpose AI to generate fake content that harms individuals in a targeted way. For example, they can use such fake content for scams, extortion, psychological manipulation, generation of non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM), or targeted sabotage of individuals and organisations."
4.3 Fraud, scams, and targeted manipulation
Manipulation of public opinion
"Malicious actors can use general-purpose AI to generate fake content such as text, images, or videos, for attempts to manipulate public opinion. Researchers believe that if successful, such attempts could have several harmful consequences."
4.1 Disinformation, surveillance, and influence at scale
Cyber offence
"Attackers are beginning to use general-purpose AI for offensive cyber operations, presenting growing but currently limited risks. Current systems have demonstrated capabilities in low- and medium-complexity cybersecurity tasks, with state-sponsored threat actors actively exploring AI to survey target systems. Malicious actors of varying skill levels can leverage these capabilities against people, organisations, and critical infrastructure such as power grids."
4.2 Cyberattacks, weapon development or use, and mass harm
Biological and chemical attacks
"Growing evidence shows general-purpose AI advances beneficial to science while also lowering some barriers to chemical and biological weapons development for both novices and experts. New language models can generate step-by-step technical instructions for creating pathogens and toxins that surpass plans written by experts with a PhD and surface information that experts struggle to find online, though their practical utility for novices remains uncertain. Other models demonstrate capabilities in engineering enhanced proteins and analysing which candidate pathogens or toxins are most harmful. Experts could potentially use these in developing both more advanced weapons and defensive measures."
4.2 Cyberattacks, weapon development or use, and mass harm
Other risks from Bengio2025 (13)
Risks from malicious use > Harm to individuals through fake content
4.3 Fraud, scams, and targeted manipulation
Risks from malicious use > Manipulation of public opinion
4.1 Disinformation, surveillance, and influence at scale
Risks from malicious use > Cyber offence
4.2 Cyberattacks, weapon development or use, and mass harm
Risks from malicious use > Biological and chemical attacks
4.2 Cyberattacks, weapon development or use, and mass harm
Reliability issues
7.3 Lack of capability or robustness
Bias
1.1 Unfair discrimination and misrepresentation