Safety culture, knowledge dissemination, and talent development within the organization.
The set of norms, attitudes and behaviors related to awareness, management and controls of risks
The major decisions on risk will be made by senior management, advised by risk experts. However, risk is created, influenced and observed by people several levels below senior management. This creates the need for a focus on risk culture (known in some industries as safety culture): the "set of norms, attitudes and behaviors related to awareness, management and controls of risks" (European Central Bank, 2023). These norms shape the day-to-day decisions that impact risks.

Another key element of culture is speak-up culture, "a workplace environment where employees feel comfortable speaking their minds, sharing their ideas, and raising concerns without fear of negative consequences" (West, n.d.). In aviation, this is often called a "just culture": a culture without retaliation for speaking up and reporting problems (Parker, 2014). A key feature of a speak-up culture is whistleblowing: processes for anonymously reporting issues. This is important at AI developers, where new risks and safety issues might be discovered serendipitously.

All aspects of culture are ultimately driven by the "tone at the top", defined as "top management's way to express [...] values pursued in the organization and provide guidance to employees" (Ewelt-Knauer et al., 2020). It refers to the communications made by senior leadership on risk and safety and is a core cultural element influencing how the whole organization takes decisions that impact risks. Finally, incentives to perform and report positive information at each level of management can filter out negative information, so that senior management receives only a small fraction of the information relevant to risk decision-making. A poor organization-wide culture can lead leadership to systematically underestimate risk.
Reasoning
Establishes safety culture and speak-up norms shaping organization-wide risk-aware behavior and decisions.
Risk Analysis and Evaluation
Risk analysis and evaluation is a process that starts with the definition of a risk tolerance. This tolerance is then operationalized into risk indicators and the corresponding mitigations required to keep risk below it.
Risk Analysis and Evaluation > Setting a Risk Tolerance
A risk tolerance represents the aggregate level of risk that society is willing to accept from AI systems.
Risk Analysis and Evaluation > Operationalizing Risk Tolerance
Risk tolerance must be operationalized into measurable criteria to be practically useful in day-to-day operations. A risk tolerance can be translated into (1) Key Risk Indicator (KRI) thresholds, which are thresholds on measurable signals that serve as proxies for risks, and (2) Key Control Indicator (KCI) thresholds, which are thresholds on measurable signals that serve as proxies for the level of mitigation achieved.
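The pairing of KRI and KCI thresholds can be sketched in code. This is a minimal illustration, not an implementation from the paper: the indicator names and numeric thresholds below are invented for the example, and the "if-then" pairing simply says that once a KRI threshold is crossed, the paired KCI threshold must be met.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Threshold:
    name: str     # measurable signal serving as a proxy (e.g. an eval score)
    limit: float  # threshold value on that signal

@dataclass(frozen=True)
class IfThenCommitment:
    """One 'if-then' statement: if the KRI threshold is crossed,
    the paired KCI threshold must be met."""
    kri: Threshold  # Key Risk Indicator: proxy for the level of risk
    kci: Threshold  # Key Control Indicator: proxy for mitigation achieved

    def triggered(self, kri_value: float) -> bool:
        # KRI threshold is crossed when the measured signal reaches the limit
        return kri_value >= self.kri.limit

    def satisfied(self, kci_value: float) -> bool:
        # Required mitigation level is met when the KCI signal reaches the limit
        return kci_value >= self.kci.limit

# Illustrative (invented) thresholds, not taken from the framework:
commitment = IfThenCommitment(
    kri=Threshold("cyber_uplift_eval_score", limit=0.5),
    kci=Threshold("containment_security_level", limit=3.0),
)
```

Encoding commitments as data like this makes them auditable and testable, which is one way the "measurable criteria" requirement could be met in practice.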
Risk Treatment
Risk treatment corresponds to the process of determining, implementing, and evaluating appropriate risk-reducing countermeasures.
Risk Treatment > Implementing Mitigation Measures
AI developers should operationalize their KCI thresholds into mitigation measures.
Risk Treatment > Continuous Monitoring and Comparing Results with Pre-determined Thresholds
Developers must implement continuous monitoring of both KRIs and KCIs to ensure that KCI thresholds are met once KRI thresholds are crossed, according to the predefined "if-then" statements established in the risk analysis and evaluation phase.
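The monitoring step above can be sketched as a simple check that is run periodically. This is a hypothetical sketch with invented indicator names and thresholds: it compares the latest KRI and KCI readings against pre-defined thresholds and flags any commitment that is triggered but whose paired mitigation level is not yet met.

```python
# Invented example thresholds, keyed by indicator name (not from the paper):
KRI_THRESHOLDS = {"cyber_uplift_eval": 0.5}  # risk proxies: crossing triggers the "if"
KCI_THRESHOLDS = {"cyber_uplift_eval": 3.0}  # mitigation proxies: the required "then"

def check_commitments(kri_readings: dict, kci_readings: dict) -> list:
    """Return the indicators whose KRI threshold is crossed while the
    paired KCI threshold is not met -- these would require escalation."""
    violations = []
    for name, kri_limit in KRI_THRESHOLDS.items():
        triggered = kri_readings.get(name, 0.0) >= kri_limit
        mitigated = kci_readings.get(name, 0.0) >= KCI_THRESHOLDS[name]
        if triggered and not mitigated:
            violations.append(name)
    return violations
```

In practice such a check would read from logged evaluation results and security audits rather than in-memory dicts, but the comparison logic stays the same: a crossed KRI without its matching KCI is the signal to escalate.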
A Frontier AI Risk Management Framework: Bridging the Gap Between Current AI Practices and Established Risk Management
Campos, Simeon; Papadatos, Henry; Roger, Fabien; Touzet, Chloé; Quarks, Otter; Murray, Malcolm (2025)
The recent development of powerful AI systems has highlighted the need for robust risk management frameworks in the AI industry. Although companies have begun to implement safety frameworks, current approaches often lack the systematic rigor found in other high-risk industries. This paper presents a comprehensive risk management framework for the development of frontier AI that bridges this gap by integrating established risk management principles with emerging AI-specific practices. The framework consists of four key components: (1) risk identification (through literature review, open-ended red-teaming, and risk modeling), (2) risk analysis and evaluation using quantitative metrics and clearly defined thresholds, (3) risk treatment through mitigation measures such as containment, deployment controls, and assurance processes, and (4) risk governance establishing clear organizational structures and accountability. Drawing from best practices in mature industries such as aviation or nuclear power, while accounting for AI's unique challenges, this framework provides AI developers with actionable guidelines for implementing robust risk management. The paper details how each component should be implemented throughout the life-cycle of the AI system - from planning through deployment - and emphasizes the importance and feasibility of conducting risk management work prior to the final training run to minimize the burden associated with it.
Other (outside lifecycle)
Outside the standard AI system lifecycle
Developer
Entity that creates, trains, or modifies the AI system
Govern
Policies, processes, and accountability structures for AI risk management
Primary
6.5 Governance failure