Structured analysis to identify, characterize, and prioritize potential harms and risks.
Also in Risk & Assurance
Balance investments between different risks and opportunities to improve overall risk
Reasoning
Establishes strategic framework for prioritizing and allocating organizational investments across different risks.
Accelerate adaptation
Accelerate adaptation to the adversary, e.g. by analyzing the adversary's past moves and developing new tactics
2.2.1 Risk Assessment
Adjust planning horizon
Lengthen or shorten the planning horizon to match the ability to forecast
2.1.3 Policies & Procedures
Amass resources
Make the arsenal of resources as large as possible
2.3.2 Access & Security Controls
Anomaly detection and investigation
Seek out anomalous readings and trigger alerts
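A minimal sketch of this strategy in code; the sliding-window baseline, the window size, and the z-score threshold of 3.0 are illustrative assumptions of mine, not values given by the source:

```python
# Flag readings that deviate strongly from the recent baseline (z-score rule)
# and return them for investigation.
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Return indices of readings that should trigger an alert."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            alerts.append(i)  # anomalous reading -> alert for investigation
    return alerts

# Example: a small oscillating signal with one injected spike at index 25
signal = [10.0, 10.2] * 15
signal[25] = 15.0
print(detect_anomalies(signal))  # → [25]
```

The z-score rule is only one possible detector; the strategy itself is agnostic about how anomalies are scored.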
1.2.3 Monitoring & Detection
Asymmetric offsets
Invest in capability that can neutralize an opponent's possible strength
2.4.1 Research & Foundations
Automatic containment system
A physical system that automatically responds to impacting events by containing the damage
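The entry describes a physical system; a rough software analogue (my assumption, not a mechanism from the source) is the circuit-breaker pattern, which automatically blocks further calls to a failing component so the damage stays contained:

```python
# Circuit breaker: after repeated failures the breaker "opens" and refuses
# further calls, containing the impact instead of letting it propagate.
class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False  # open breaker = calls blocked

    def call(self, fn, *args):
        if self.open:
            raise RuntimeError("contained: breaker is open")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True  # automatic containment kicks in
            raise
        self.failures = 0  # a success resets the failure count
        return result
```

The `max_failures` threshold is illustrative; production breakers typically also add a timed half-open state to probe for recovery.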
1.2.2 Runtime Environment
From nuclear safety to LLM security: Applying non-probabilistic risk management strategies to build safe and secure LLM-powered systems
Gutfraind, Alexander; Bier, Vicki (2025)
Large language models (LLMs) offer unprecedented and growing capabilities, but also introduce complex safety and security challenges that resist conventional risk management. While conventional probabilistic risk analysis (PRA) requires exhaustive risk enumeration and quantification, the novelty and complexity of these systems make PRA impractical, particularly against adaptive adversaries. Previous research found that risk-management problems in fields of engineering such as nuclear or civil engineering are often solved by generic (i.e. field-agnostic) strategies such as event tree analysis or robust designs. Here we show how emerging risks in LLM-powered systems, including risks from adaptive adversaries, could be met with over 100 of these non-probabilistic risk-management strategies. The strategies are divided into five categories and are mapped to LLM security (and AI safety more broadly). We also present an LLM-powered workflow for applying these strategies, as well as other workflows suitable for solution architects. Overall, these strategies could contribute, despite some limitations, to security, safety, and other dimensions of responsible AI.
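The abstract cites event tree analysis as one of the generic engineering strategies. As a minimal sketch of the idea, here is a simplified chain in which an initiating event passes through a sequence of safeguards and the first failure ends the path; all probabilities and safeguard names are made-up illustration values, not figures from the paper:

```python
# Event tree analysis (simplified): each path's probability is the product
# of the branch probabilities along it.
def event_tree(initiating_p, safeguards):
    """Enumerate end states of a linear event tree.

    safeguards: list of (name, p_success) applied in order.
    Returns {outcome_description: probability}.
    """
    outcomes = {}
    p_path = initiating_p
    for name, p_success in safeguards:
        outcomes[f"{name} fails"] = p_path * (1 - p_success)
        p_path *= p_success
    outcomes["all safeguards hold"] = p_path
    return outcomes

tree = event_tree(
    0.01,  # hypothetical chance of a prompt-injection attempt
    [("input filter", 0.9), ("output guardrail", 0.8)],
)
# e.g. tree["all safeguards hold"] ≈ 0.01 * 0.9 * 0.8 = 0.0072
```

Full event trees branch on each failure as well; this linear variant only conveys the product-of-branches bookkeeping.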
Other (outside lifecycle)
Outside the standard AI system lifecycle
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritizing, responding to, and mitigating AI risks