Nuclear Power Systems
AI systems that fail to perform reliably or effectively under varying conditions are exposed to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.
"General-purpose AI deployed for reactor monitoring, control system optimization, or emergency response coordination could misinterpret sensor data, fail to recognize critical safety conditions, or make erroneous control decisions during emergency scenarios. Given the catastrophic potential of nuclear accidents, even minor AI reasoning errors in safety-critical functions could lead to core meltdowns, radiation releases, or widespread contamination affecting hundreds of thousands of people across international borders."(p. 8)
Other risks from SAIL & Concordia AI (2025) (36):
Misuse Risks: 4.0 Malicious Actors & Misuse
Loss of Control Risks: 5.2 Loss of human agency and autonomy
Accident Risks: 7.3 Lack of capability or robustness
Model Capabilities: 7.2 AI possessing dangerous capabilities
Cyber Offense Risks: 4.2 Cyberattacks, weapon development or use, and mass harm
Biological and Chemical Risks: 4.2 Cyberattacks, weapon development or use, and mass harm