Structured analysis to identify, characterize, and prioritize potential harms and risks.
The Delphi technique is a multi-round forecasting process that uses a structured framework to collect and collate multiple expert judgments. Because participation is anonymous and remote, it may yield more accurate likelihood estimates than simply averaging individual estimates or holding unstructured group discussions [107]. Given a panel of experts, at each round the experts are presented with an aggregated summary of the results from the previous round and may then update their answers accordingly. The process ends when either a consensus is reached or the responses in later rounds no longer change significantly. This method elicits expert judgment while drawing on the wisdom of the crowd. A potential application of the Delphi technique is to solicit expert judgment on the likelihood of systemic risks from AI development, where crucial variables identified in each questionnaire round can be studied further for the purpose of risk mitigation.
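The iterative structure of the process can be illustrated with a small simulation. The sketch below is an assumption-laden toy model, not part of the method as cited: it assumes experts revise their probability estimates partway toward the panel median each round (the `adjust_weight` parameter), and treats "responses no longer change significantly" as a simple tolerance check.

```python
from statistics import median

def delphi_rounds(initial_estimates, adjust_weight=0.5, tol=0.01, max_rounds=10):
    """Toy simulation of Delphi aggregation.

    After each round, the panel is shown the median of the previous
    round's estimates and each expert moves a fraction `adjust_weight`
    of the way toward it. Stops when no estimate moves more than `tol`
    (interpreted as consensus) or after `max_rounds`.
    """
    estimates = list(initial_estimates)
    for round_no in range(1, max_rounds + 1):
        summary = median(estimates)  # aggregated feedback shown to the panel
        revised = [e + adjust_weight * (summary - e) for e in estimates]
        if max(abs(r - e) for r, e in zip(revised, estimates)) < tol:
            return revised, round_no  # responses have stabilized
        estimates = revised
    return estimates, max_rounds

# Five hypothetical experts estimating the probability of a systemic risk
final, rounds = delphi_rounds([0.10, 0.25, 0.40, 0.15, 0.30])
```

In this toy run the panel converges near the initial median (0.25) within a few rounds; a real Delphi study would replace the mechanical revision rule with genuine expert reconsideration of the aggregated feedback.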
Reasoning
The Delphi technique structures the expert-consensus process to identify and prioritize AI risks systematically.
Risk Assessment
Model development: 2.4 Engineering & Development
Model development > Data-related: 1.1 Model
Model evaluations: 2.2.2 Testing & Evaluation
Model evaluations > General evaluations: 2.2.2 Testing & Evaluation
Model evaluations > Benchmarking: 3.2.1 Benchmarks & Evaluation
Model evaluations > Red teaming: 2.2.2 Testing & Evaluation

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, from which 831 distinct AI risk mitigations were extracted into a living database.
Plan and Design: Designing the AI system, defining requirements, and planning development
Governance Actor: Regulator, standards body, or oversight entity shaping AI policy
Map: Identifying and documenting AI risks, contexts, and impacts
Primary: 6.5 Governance failure