Home/Risks/Weidinger et al. (2021)/Assisting code generation for cyber attacks, weapons, or malicious use
Assisting code generation for cyber attacks, weapons, or malicious use
Risk Domain
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or use weapons to cause mass harm.
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Part of Malicious Uses
Other risks from Weidinger et al. (2021) (26)
Discrimination, Exclusion and Toxicity
1.0 Discrimination & Toxicity (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)

Discrimination, Exclusion and Toxicity > Social stereotypes and unfair discrimination
1.1 Unfair discrimination and misrepresentation (Entity: AI system; Intent: Unintentional; Timing: Other)

Discrimination, Exclusion and Toxicity > Exclusionary norms
1.1 Unfair discrimination and misrepresentation (Entity: AI system; Intent: Unintentional; Timing: Other)

Discrimination, Exclusion and Toxicity > Toxic language
1.2 Exposure to toxic content (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)

Discrimination, Exclusion and Toxicity > Lower performance for some languages and social groups
1.3 Unequal performance across groups (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)

Information Hazards
2.1 Compromise of privacy by leaking or correctly inferring sensitive information (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)