Malicious Use
"AI systems reducing the costs and facilitating activities of actors trying to cause harm (e.g. fraud, weapons)" (p. 14)
Sub-categories (4)
Influence operations
"Facilitating large-scale disinformation campaigns and targeted manipulation of public opinion"
Maps to: 4.1 Disinformation, surveillance, and influence at scale

Fraud
"Facilitating fraud, cheating, forgery, and impersonation scams"
Maps to: 4.3 Fraud, scams, and targeted manipulation

Defamation
"Facilitating slander, defamation, or false accusations"
Maps to: 4.1 Disinformation, surveillance, and influence at scale

Security threats
"Facilitating the conduct of cyber attacks, weapon development, and security breaches"
Maps to: 4.2 Cyberattacks, weapon development or use, and mass harm

Other risks from Weidinger et al. (2023) (26)

Representation & Toxicity Harms → 1.0 Discrimination & Toxicity
Representation & Toxicity Harms > Unfair representation → 1.1 Unfair discrimination and misrepresentation
Representation & Toxicity Harms > Unfair capability distribution → 1.3 Unequal performance across groups
Representation & Toxicity Harms > Toxic content → 1.2 Exposure to toxic content
Misinformation Harms → 3.0 Misinformation
Misinformation Harms > Propagating misconceptions/false beliefs → 3.1 False or misleading information