Structured analysis to identify, characterize, and prioritize potential harms and risks.
Bow-tie analysis is a method for assessing how well implemented controls address a particular risk event. The unwanted event is placed at the center of a diagram. On the left are the factors that can cause the event, followed by the preventive controls that reduce its likelihood; on the right, the event is assumed to have occurred, and the potential effects are listed alongside the post-hoc controls that could minimize their impact [107].
For example, given the hazard that an AI model is capable of generating potentially harmful outputs, one risk event is the model providing information on how to create a bioweapon. Risk factors could include dangerous data in the training corpus and a lack of fine-tuning prior to deployment; risk effects could include the creation and use of a bioweapon based on the AI-generated information. Once these risk factors and effects are mapped, both preventive and post-hoc controls can be planned: appropriate filtering of training data and rigorous red teaming prior to deployment as preventive barriers, and know-your-customer policies and model output censoring techniques as reactive barriers.
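The bow-tie structure described above can be captured in a simple data model. The following Python sketch is illustrative only; the class and field names are my own and are not part of any standard bow-tie tooling:

```python
from dataclasses import dataclass, field


@dataclass
class BowTie:
    """Minimal bow-tie model: causes and preventive controls sit to the
    left of the central event, effects and reactive controls to the right."""
    event: str
    causes: list = field(default_factory=list)               # left side: risk factors
    preventive_controls: list = field(default_factory=list)  # barriers before the event
    effects: list = field(default_factory=list)              # right side: consequences
    reactive_controls: list = field(default_factory=list)    # barriers after the event


def summarize(bt: BowTie) -> str:
    """Render each pathway as cause -> [preventive] -> EVENT -> [reactive] -> effect."""
    lines = []
    for cause in bt.causes:
        lines.append(f"{cause} -> {bt.preventive_controls} -> {bt.event}")
    for effect in bt.effects:
        lines.append(f"{bt.event} -> {bt.reactive_controls} -> {effect}")
    return "\n".join(lines)


# The bioweapon example from the text, encoded as a bow-tie:
bio_risk = BowTie(
    event="AI provides instructions for creating a bioweapon",
    causes=["dangerous data in training corpus",
            "no safety fine-tuning before deployment"],
    preventive_controls=["filter training data",
                         "red teaming before deployment"],
    effects=["creation and use of a bioweapon from AI-generated information"],
    reactive_controls=["know-your-customer policies",
                       "model output censoring"],
)

print(summarize(bio_risk))
```

Each printed line traces one pathway through the diagram, which makes it easy to spot causes or effects that have no corresponding barrier.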
Reasoning
Bow-tie analysis systematically identifies and prioritizes potential harms and risks pre-deployment.
Risk Assessment

- Model development (2.4 Engineering & Development)
- Model development > Data-related (1.1 Model)
- Model evaluations (2.2.2 Testing & Evaluation)
- Model evaluations > General evaluations (2.2.2 Testing & Evaluation)
- Model evaluations > Benchmarking (3.2.1 Benchmarks & Evaluation)
- Model evaluations > Red teaming (2.2.2 Testing & Evaluation)

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, from which 831 distinct AI risk mitigations were extracted into a living database.
Plan and Design: Designing the AI system, defining requirements, and planning development.

Developer: Entity that creates, trains, or modifies the AI system.

Measure: Quantifying, testing, and monitoring identified AI risks.