Structured analysis to identify, characterize, and prioritize potential harms and risks.
The fishbone diagram, also known as a “cause-and-effect diagram” [107], can be used to show the potential causes of an undesirable event. The diagram is created by first placing a specific risk event at the “head” of the diagram, typically facing right. To the left of the risk event, “ribs” branch off from the “backbone” to represent major causes; these branch further into sub-branches representing root causes, extending to as many levels as required. Construction typically proceeds by backward reasoning: once the risk event has been selected for analysis, potential causes are explored working backward from it.
For example, the risk event “AI systems generate toxic content” can be placed at the “head” of the diagram, with branches for causal factors such as “AI trained on data containing toxicity” and “successful jailbreaking of AI despite fine-tuning.”
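The backward-reasoning structure above can be sketched as a small tree: the risk event at the root, major causes as branches, and root causes as sub-branches. The sketch below is illustrative only; the specific sub-causes (e.g. “Insufficient data filtering”) are assumptions added for the example, not part of the source.

```python
def fishbone_lines(risk_event, causes, depth=0):
    """Flatten a fishbone diagram into indented text lines.

    `causes` is a nested dict: each key is a cause, each value is a
    dict of its sub-causes (empty dict for a leaf root cause).
    """
    lines = [f"Risk event: {risk_event}"] if depth == 0 else []
    for cause, sub_causes in causes.items():
        lines.append("  " * depth + "- " + cause)
        lines.extend(fishbone_lines(risk_event, sub_causes, depth + 1))
    return lines


# Backward reasoning: start from the selected risk event, then branch
# into major causes and their (hypothetical) root causes.
diagram = {
    "AI trained on data containing toxicity": {
        "Insufficient data filtering": {},
    },
    "Successful jailbreaking of AI despite fine-tuning": {
        "Adversarial prompts discovered post-deployment": {},
    },
}

for line in fishbone_lines("AI systems generate toxic content", diagram):
    print(line)
```

Each level of indentation in the output corresponds to one level of branching away from the “backbone” of the diagram.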
Reasoning
Insufficient information: "Fishbone diagram" lacks detail on focal activity, mechanism, and implementation context.
Risk Assessment
Model development → 2.4 Engineering & Development
Model development > Data-related → 1.1 Model
Model evaluations → 2.2.2 Testing & Evaluation
Model evaluations > General evaluations → 2.2.2 Testing & Evaluation
Model evaluations > Benchmarking → 3.2.1 Benchmarks & Evaluation
Model evaluations > Red teaming → 2.2.2 Testing & Evaluation

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, from which 831 distinct AI risk mitigations were extracted into a living database.
Plan and Design: Designing the AI system, defining requirements, and planning development
Other (multiple actors): Applies across multiple actor types
Map: Identifying and documenting AI risks, contexts, and impacts