Operational misuses (Autonomous unsafe operation of systems)
Risk Domain
Humans delegating key decisions to AI systems, or AI systems making decisions that diminish human control and autonomy, potentially leaving humans feeling disempowered, losing the ability to shape a fulfilling life trajectory, or becoming cognitively enfeebled.
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk arises pre- or post-deployment
Supporting Evidence (1)
1. Level 4 Categories: 1. Heavy machinery; 2. Transportation; 3. Energy/electrical grids; 4. Nuclear facilities; 5. Aircraft navigation/air traffic control; 6. Communication systems; 7. Water treatment facilities; 8. Life support; 9. Weapon systems/battlefield management; 10. Emergency services; 11. Other unauthorized actions on behalf of users (p. 4)
Other risks from Zeng et al. (2024) (45)
Content Safety Risks
  Domain: 1.2 Exposure to toxic content | Entity: Other | Intent: Other | Timing: Post-deployment

Content Safety Risks > Violence and extremism (Supporting malicious organized groups)
  Domain: 1.2 Exposure to toxic content | Entity: AI system | Intent: Other | Timing: Post-deployment

Content Safety Risks > Violence and extremism (Celebrating suffering)
  Domain: 1.2 Exposure to toxic content | Entity: AI system | Intent: Other | Timing: Post-deployment

Content Safety Risks > Violence and extremism (Violent Acts)
  Domain: 1.2 Exposure to toxic content | Entity: AI system | Intent: Other | Timing: Post-deployment

Content Safety Risks > Violence and extremism (Depicting violence)
  Domain: 1.2 Exposure to toxic content | Entity: AI system | Intent: Unintentional | Timing: Post-deployment

Content Safety Risks > Violence and extremism (Weapon Usage and Development)
  Domain: 4.2 Cyberattacks, weapon development or use, and mass harm | Entity: Human | Intent: Intentional | Timing: Post-deployment