Structured analysis to identify, characterize, and prioritize potential harms and risks.
A risk matrix is a method for risk evaluation: a heatmap in which each cell shows a risk's severity score weighted by its likelihood score, each usually rated on a scale of 1-5. Constructing a risk matrix requires two rankings: one for the severity of risks and a corresponding one for their likelihood [107]. AI-related risks can be generated using appropriate taxonomies and placed into the relevant cells according to their assessed likelihood and severity under predefined criteria (e.g., likelihood level 1 corresponds to a < 1% chance and likelihood level 5 to a > 90% chance, while severity level 1 corresponds to a mild inconvenience to the user and severity level 5 to a fatality or financial damage upwards of $10 million). Particular focus can then be given to mitigating risks with higher weighted scores (i.e., likelihood multiplied by severity).
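The scoring scheme above can be sketched in a few lines of Python. This is a minimal illustration, not a prescribed implementation: the `Risk` class, the example risks, and their likelihood/severity values are all hypothetical, chosen only to show how cells of a 5x5 matrix are populated and how risks are ranked by their weighted score.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (< 1% chance) .. 5 (> 90% chance)
    severity: int    # 1 (mild inconvenience) .. 5 (fatality / > $10M damage)

    @property
    def score(self) -> int:
        # Weighted score: likelihood multiplied by severity.
        return self.likelihood * self.severity

# Hypothetical example risks with illustrative scores.
risks = [
    Risk("Training data leakage", likelihood=4, severity=3),
    Risk("Harmful model output", likelihood=2, severity=5),
    Risk("Service outage", likelihood=3, severity=2),
]

# Build the 5x5 matrix: matrix[severity-1][likelihood-1] collects risk names.
matrix = [[[] for _ in range(5)] for _ in range(5)]
for r in risks:
    matrix[r.severity - 1][r.likelihood - 1].append(r.name)

# Prioritize mitigation: highest weighted score first.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.name}: {r.likelihood} x {r.severity} = {r.score}")
```

Sorting by the product surfaces "Training data leakage" (4 x 3 = 12) ahead of "Harmful model output" (2 x 5 = 10), which is exactly the prioritization effect the weighted heatmap is meant to produce.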
Reasoning
Risk matrices systematically identify, characterize, and prioritize potential harms and risks pre-deployment.
Risk Assessment
Model development: 2.4 Engineering & Development
Model development > Data-related: 1.1 Model
Model evaluations: 2.2.2 Testing & Evaluation
Model evaluations > General evaluations: 2.2.2 Testing & Evaluation
Model evaluations > Benchmarking: 3.2.1 Benchmarks & Evaluation
Model evaluations > Red teaming: 2.2.2 Testing & Evaluation

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, from which 831 distinct AI risk mitigations were extracted into a living database.
Plan and Design
Designing the AI system, defining requirements, and planning development
Deployer
Entity that integrates and deploys the AI system for end users
Measure
Quantifying, testing, and monitoring identified AI risks
Primary
6.5 Governance failure