Cross-organization coordination mechanisms, information sharing, and collaborative monitoring.
Stage: Escalation
Stakeholder: National Government (AISI)
Additional information: Government stakeholders should consider mandatory reporting mechanisms for AI risks and potential incidents, mandating legal disclosure requirements that cover key risk scenarios: model theft (e.g. stolen weights or unauthorised access), deceptive model behaviour (e.g. models manipulating evaluations to appear weaker), and emergent risky capabilities (e.g. escape or uncontrolled replication of models, and extreme capability breakthroughs). Governments should also clarify how existing cyber incident reporting mechanisms apply to AI-related incidents. Independent safety evaluators, third-party auditors, and compute providers could be granted the authority to report high-risk developments directly to oversight bodies.
Reasoning
Establishes communication channels between government and AI developers/compute providers for cross-stakeholder coordination without state enforcement.
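As an illustration of what such a disclosure channel might carry, the sketch below models a minimal incident report covering the risk scenarios named above (model theft, deceptive model behaviour, emergent risky capabilities). All class names, fields, and values are hypothetical assumptions for illustration, not a schema from any actual reporting regime:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class RiskScenario(Enum):
    """Risk categories named in the disclosure recommendation above."""
    MODEL_THEFT = "model_theft"                  # e.g. stolen weights, unauthorised access
    DECEPTIVE_BEHAVIOUR = "deceptive_behaviour"  # e.g. manipulating evaluations to appear weaker
    EMERGENT_CAPABILITY = "emergent_capability"  # e.g. uncontrolled replication, capability breakthroughs

@dataclass
class IncidentReport:
    """Minimal record an evaluator, auditor, or compute provider might file."""
    reporter: str          # who is filing, e.g. "third-party auditor"
    scenario: RiskScenario
    description: str
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example filing by an independent evaluator.
report = IncidentReport(
    reporter="independent safety evaluator",
    scenario=RiskScenario.DECEPTIVE_BEHAVIOUR,
    description="Model appeared to underperform selectively on capability evals.",
)
```

A real regime would of course add identifiers, severity grading, and routing to the relevant oversight body; the point here is only that the three named risk scenarios are concrete enough to enumerate.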
Monitor critical capability levels (2.2.2 Testing & Evaluation)
Identify early warning signs and emergent capabilities (2.2.1 Risk Assessment)
Establish standardised benchmarks and reporting (3.2.1 Benchmarks & Evaluation)
Implement compute monitoring and anomaly detection (1.2.3 Monitoring & Detection)
Enhance hardware and supply chain oversight (2.3.3 Monitoring & Logging)
Lead efforts to establish shared criteria for AI LOC (3.2.2 Technical Standards)

Strengthening Emergency Preparedness and Response for AI Loss of Control Incidents
Somani, Elika; Friedman, Anjay; Wu, Henry; Lu, Marianne; Byrd, Christopher; van Soest, Henri; Zakaria, Sana (2025)
As artificial intelligence (AI) systems become increasingly embedded in essential infrastructure and services, the risks associated with unintended failures rise. Developing comprehensive emergency response protocols could help mitigate these risks. This report focuses on understanding and addressing AI loss of control (LOC): scenarios in which human oversight fails to adequately constrain an autonomous, general-purpose AI system.
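The compute monitoring and anomaly detection measure listed earlier could, at its simplest, amount to a statistical check on compute-usage logs. The sketch below is illustrative only; the function names, the z-score rule, and the three-sigma threshold are assumptions, not anything prescribed by the report:

```python
from statistics import mean, stdev

def fit_baseline(history):
    """Summarise historical compute usage (e.g. hourly accelerator-hours)."""
    return mean(history), stdev(history)

def is_anomalous(reading, mu, sigma, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations above baseline."""
    return sigma > 0 and (reading - mu) / sigma > threshold

# Steady baseline of hourly accelerator-hours, then a sudden spike.
mu, sigma = fit_baseline([100, 102, 98, 101, 99, 100, 97, 103])
print(is_anomalous(500, mu, sigma))  # True: the spike stands out
print(is_anomalous(101, mu, sigma))  # False: normal variation does not
```

In practice such a detector would feed the reporting channels described above rather than act on its own; the design choice worth noting is that the baseline is fitted only on known-normal history, so a single large excursion cannot mask itself by inflating the statistics.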
Operate and Monitor: Running, maintaining, and monitoring the AI system post-deployment.
Governance Actor: A regulator, standards body, or oversight entity shaping AI policy.
Govern: Policies, processes, and accountability structures for AI risk management.