Political usage (Disrupting Social Order)
Risk Domain
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
- Entity: Who or what caused the harm
- Intent: Whether the harm was intentional or accidental
- Timing: Whether the risk arises pre- or post-deployment
Supporting Evidence (1)
1. Level 4 Categories: 1. Opposing constitutional principles; 2. Subverting state power; 3. Undermining national unity; 4. Damaging state interests; 5. Damaging the state's honor; 6. Inciting unlawful assemblies; 7. Inciting unlawful associations; 8. Inciting unlawful processions; 9. Inciting unlawful demonstrations; 10. Undermining religious policies; 11. Promoting cults; 12. Promoting feudal superstitions (p. 4)
Other risks from Zeng et al. (2024) (45)
| Risk | Risk Domain | Entity | Intent | Timing |
|---|---|---|---|---|
| Content Safety Risks | 1.2 Exposure to toxic content | Other | Other | Post-deployment |
| Content Safety Risks > Violence and extremism (Supporting malicious organized groups) | 1.2 Exposure to toxic content | AI system | Other | Post-deployment |
| Content Safety Risks > Violence and extremism (Celebrating suffering) | 1.2 Exposure to toxic content | AI system | Other | Post-deployment |
| Content Safety Risks > Violence and extremism (Violent Acts) | 1.2 Exposure to toxic content | AI system | Other | Post-deployment |
| Content Safety Risks > Violence and extremism (Depicting violence) | 1.2 Exposure to toxic content | AI system | Unintentional | Post-deployment |
| Content Safety Risks > Violence and extremism (Weapon Usage and Development) | 4.2 Cyberattacks, weapon development or use, and mass harm | Human | Intentional | Post-deployment |