Dual use science risks
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), to develop new weapons or enhance existing ones (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosive weapons), or to use such weapons to cause mass harm.
"General- purpose AI systems could accelerate advances in a range of scientific endeavours, from training new scientists to enabling faster research workflows. While these capabilities could have numerous beneficial applications, some experts have expressed concern that they could be used for malicious purposes, especially if further capabilities are developed soon before appropriate countermeasures are put in place. There are two avenues by which general- purpose AI systems could, speculatively, facilitate malicious use in the life sciences: firstly by providing increased access to information and expertise relevant to malicious use, and secondly by increasing the ceiling of capabilities, which may enable the development of more harmful versions of existing threats or, eventually, lead to novel threats (404, 405)."(p. 45)
Part of Malicious Use Risks
Other risks from Bengio et al. (2024) (14)
Malicious Use Risks (4.0 Malicious Actors & Misuse)
Malicious Use Risks > Harm to individuals through fake content (4.3 Fraud, scams, and targeted manipulation)
Malicious Use Risks > Disinformation and manipulation of public opinion (4.1 Disinformation, surveillance, and influence at scale)
Malicious Use Risks > Cyber offence (4.2 Cyberattacks, weapon development or use, and mass harm)
Risks from Malfunctions (7.0 AI System Safety, Failures & Limitations)
Risks from Malfunctions > Risks from product functionality issues (5.1 Overreliance and unsafe use)