Structured analysis to identify, characterize, and prioritize potential harms and risks.
Causal mapping is a technique for exploring and mapping the complex cause-and-effect interactions among risks. It involves generating potential events related to an undesirable issue, with each event represented by a text box, then clustering similar events by theme, and finally drawing arrows to illustrate the causal relationships between events. The completed causal map can then be analyzed to identify central events, clusters of events, feedback loops, and other relevant patterns [107].
For example, causal mapping can be used to explore factors that lead to high-level model capabilities (e.g., "machine intelligence"). The nodes may include factors such as "concept formation" and "flexible memory," and certain nodes may be found to be especially crucial if they have more outgoing arrows connecting them to other nodes.
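The analysis steps described above can be sketched as operations on a small directed graph. This is a minimal illustration, not part of the cited method: the node names echo the example in the text, and the edges are hypothetical. Out-degree picks out "central" events, and a depth-first search detects feedback loops.

```python
from collections import defaultdict

# Illustrative causal map: each edge means "source contributes to target".
# Node names follow the example in the text; the edges are hypothetical.
edges = [
    ("concept formation", "machine intelligence"),
    ("flexible memory", "machine intelligence"),
    ("concept formation", "flexible memory"),
    ("machine intelligence", "concept formation"),  # closes a feedback loop
]

graph = defaultdict(list)
for src, dst in edges:
    graph[src].append(dst)

# Central events: nodes with the most outgoing arrows.
out_degree = {node: len(targets) for node, targets in graph.items()}
central = max(out_degree, key=out_degree.get)

# Feedback loops: a cycle found by depth-first search (gray = on current path).
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)

    def dfs(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color[nxt] == GRAY:      # back edge => feedback loop
                return True
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(dfs(n) for n in list(graph) if color[n] == WHITE)

print(central)           # "concept formation": two outgoing arrows
print(has_cycle(graph))  # True: machine intelligence -> concept formation -> ...
```

In a real analysis the events would come from a workshop or elicitation exercise rather than being hard-coded, but the same two queries (out-degree ranking and cycle detection) capture what the completed map is inspected for.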
Reasoning
Insufficient information: "Causal mapping" lacks the definition and evidence needed to identify a focal activity or an implementation location.
Risk Assessment

Model development: 2.4 Engineering & Development
Model development > Data-related: 1.1 Model
Model evaluations: 2.2.2 Testing & Evaluation
Model evaluations > General evaluations: 2.2.2 Testing & Evaluation
Model evaluations > Benchmarking: 3.2.1 Benchmarks & Evaluation
Model evaluations > Red teaming: 2.2.2 Testing & Evaluation

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, which were extracted into a living database of 831 distinct AI risk mitigations.
Plan and Design: Designing the AI system, defining requirements, and planning development
Developer: Entity that creates, trains, or modifies the AI system
Map: Identifying and documenting AI risks, contexts, and impacts
Primary: 7 AI System Safety, Failures & Limitations