Mandated reporting, disclosure obligations, and registration requirements imposed by law.
Also in Legal & Regulatory
External communication of risks and decision-making
Without transparency, the public and regulators would have no way of judging whether the company adequately manages risk. The first type of communication is external disclosure of the risks faced by the organization. In the United States, this is required for listed companies and is provided in the annual report (White & Case LLP, 2023). In the case of AI, given the broad nature of the risks the technology poses, this disclosure should be broadened to include risks to society from the company’s products. It is also important that the organization disclose details of its governance structure to provide transparency on how risk is managed. In other industries, annual reports must disclose elements of governance, both in the United States (U.S. Securities and Exchange Commission, 2024) and in other jurisdictions (UN Trade and Development, 2006). In the case of AI, with rapidly changing risks and significant potential for hidden errors, it is also vital that organizations provide external incident reporting. Such reporting can be addressed to industry bodies, such as the Frontier Model Forum (Frontier Model Forum, 2023), or to regulators.
Reasoning
Establishes formal disclosure policies for external communication of risks and governance structure.
Risk Analysis and Evaluation
Risk analysis and evaluation is a process that starts with the definition of a risk tolerance. This risk tolerance is then operationalized into risk indicators and the corresponding mitigations required to keep risk below the tolerance.
2.2.1 Risk Assessment
Risk Analysis and Evaluation > Setting a Risk Tolerance
A risk tolerance represents the aggregate level of risk that society is willing to accept from AI systems.
3 Ecosystem
Risk Analysis and Evaluation > Operationalizing Risk Tolerance
Risk tolerance must be operationalized into measurable criteria to be practically useful in day-to-day operations. A risk tolerance can be translated into (1) Key Risk Indicator (KRI) thresholds, which are thresholds on measurable signals that serve as proxies for risks, and (2) Key Control Indicator (KCI) thresholds, which are thresholds on measurable signals that serve as proxies for the level of mitigation achieved.
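To illustrate (the signal names and numbers below are hypothetical, not taken from the paper), a developer might encode this translation as explicit "if-then" pairings: each KRI threshold, when crossed, triggers a corresponding KCI threshold that must be met.

```python
# Hypothetical sketch of operationalizing a risk tolerance into paired
# KRI/KCI thresholds. Signal names and values are illustrative only.

# "If" part: measurable signals that serve as proxies for risk.
KRI_THRESHOLDS = {
    "cyberoffense_eval_score": 0.6,  # e.g. fraction of offensive-cyber tasks solved
    "bio_uplift_eval_score": 0.4,    # e.g. measured uplift over a web-search baseline
}

# "Then" part: minimum mitigation levels (proxies for mitigation strength)
# that must be demonstrated once the corresponding KRI threshold is crossed.
KCI_THRESHOLDS = {
    "cyberoffense_eval_score": {"weights_security_level": 3},
    "bio_uplift_eval_score": {"harmful_request_refusal_rate": 0.99},
}
```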
2.2.1 Risk Assessment
Risk Treatment
Risk treatment corresponds to the process of determining, implementing, and evaluating appropriate risk-reducing countermeasures.
2.2 Risk & Assurance
Risk Treatment > Implementing Mitigation Measures
AI developers should operationalize their KCI thresholds into mitigation measures.
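As a sketch of what that operationalization might look like (the KCI names and specific measures are hypothetical; the containment, deployment, and assurance categories follow the paper's abstract), a developer could maintain a plan mapping each KCI threshold to the concrete measures intended to satisfy it.

```python
# Hypothetical mapping from KCI thresholds to the mitigation measures
# intended to satisfy them, grouped by the framework's broad categories
# (containment, deployment controls, assurance processes).
MITIGATION_PLAN = {
    "weights_security_level": [
        ("containment", "restrict model-weight access to hardened infrastructure"),
        ("assurance", "independent audit of access controls"),
    ],
    "harmful_request_refusal_rate": [
        ("deployment", "refusal training plus input/output filtering"),
        ("assurance", "third-party red-team evaluation of refusal behaviour"),
    ],
}
```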
2.3 Operations & Security
Risk Treatment > Continuous Monitoring and Comparing Results with Pre-determined Thresholds
Developers must therefore implement continuous monitoring of both KRIs and KCIs to ensure that, once a KRI threshold is crossed, the corresponding KCI threshold is met, in accordance with the predefined "if-then" statements established during the risk analysis and evaluation phase.
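A minimal, self-contained monitoring sketch (hypothetical names and structure, assuming threshold pairings like those above) could periodically compare the latest KRI and KCI measurements against their thresholds and flag any pairing whose KRI has been crossed without the required KCI being met.

```python
from typing import NamedTuple

class Rule(NamedTuple):
    """One 'if-then' pairing: if kri >= kri_threshold, then kci >= kci_threshold must hold."""
    kri: str
    kri_threshold: float
    kci: str
    kci_threshold: float

def unmet_kcis(measurements: dict[str, float], rules: list[Rule]) -> list[str]:
    """Return the KCIs that are required (their KRI threshold is crossed) but not currently met."""
    unmet = []
    for r in rules:
        kri_crossed = measurements.get(r.kri, 0.0) >= r.kri_threshold
        kci_met = measurements.get(r.kci, 0.0) >= r.kci_threshold
        if kri_crossed and not kci_met:
            unmet.append(r.kci)
    return unmet

# Example run with hypothetical measurements: the cyber KRI is crossed
# but the paired security KCI falls short, so it is flagged.
rules = [Rule("cyberoffense_eval_score", 0.6, "weights_security_level", 3.0)]
latest = {"cyberoffense_eval_score": 0.7, "weights_security_level": 2.0}
assert unmet_kcis(latest, rules) == ["weights_security_level"]
```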
2.3.3 Monitoring & Logging
A Frontier AI Risk Management Framework: Bridging the Gap Between Current AI Practices and Established Risk Management
Campos, Simeon; Papadatos, Henry; Roger, Fabien; Touzet, Chloé; Quarks, Otter; Murray, Malcolm (2025)
The recent development of powerful AI systems has highlighted the need for robust risk management frameworks in the AI industry. Although companies have begun to implement safety frameworks, current approaches often lack the systematic rigor found in other high-risk industries. This paper presents a comprehensive risk management framework for the development of frontier AI that bridges this gap by integrating established risk management principles with emerging AI-specific practices. The framework consists of four key components: (1) risk identification (through literature review, open-ended red-teaming, and risk modeling), (2) risk analysis and evaluation using quantitative metrics and clearly defined thresholds, (3) risk treatment through mitigation measures such as containment, deployment controls, and assurance processes, and (4) risk governance establishing clear organizational structures and accountability. Drawing from best practices in mature industries such as aviation or nuclear power, while accounting for AI's unique challenges, this framework provides AI developers with actionable guidelines for implementing robust risk management. The paper details how each component should be implemented throughout the life-cycle of the AI system - from planning through deployment - and emphasizes the importance and feasibility of conducting risk management work prior to the final training run to minimize the burden associated with it.
Other (outside lifecycle)
Outside the standard AI system lifecycle
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks