Cross-organization coordination mechanisms, information sharing, and collaborative monitoring.
Stage: Escalation
Stakeholder: Compute Providers

Additional information: AI developers, AISIs, and relevant government departments should enhance cross-sector and international coordination, including clear communication lines, information-sharing agreements, and predefined escalation pathways (see Annex A). These could include secure emergency hotlines between AI developers and national AI safety regulators, classified communication channels, and sector-specific CERTs for AI incidents. AISIs could act as central, secure information hubs that consolidate national data and facilitate trusted exchanges with international counterparts. Global emergency response exercises, potentially run through multilateral forums, could improve preparedness and refine coordination protocols. International agreements could provide additional mechanisms for addressing AI risks in the health and cyber domains (WHO 2025).

Reasoning
Compute providers coordinate with AI developers and national authorities to manage escalation, relying on cross-organization information sharing rather than state enforcement.

Also in Voluntary & Cooperative
- Monitor critical capability levels (2.2.2 Testing & Evaluation)
- Identify early warning signs and emergent capabilities (2.2.1 Risk Assessment)
- Establish standardised benchmarks and reporting (3.2.1 Benchmarks & Evaluation)
- Implement compute monitoring and anomaly detection (1.2.3 Monitoring & Detection)
- Enhance hardware and supply chain oversight (2.3.3 Monitoring & Logging)
- Lead efforts to establish shared criteria for AI LOC (3.2.2 Technical Standards)

Strengthening Emergency Preparedness and Response for AI Loss of Control Incidents
Somani, Elika; Friedman, Anjay; Wu, Henry; Lu, Marianne; Byrd, Christopher; van Soest, Henri; Zakaria, Sana (2025)
As artificial intelligence (AI) systems become increasingly embedded in essential infrastructure and services, the risks associated with unintended failures rise. Developing comprehensive emergency response protocols could help mitigate these significant risks. This report focuses on understanding and addressing AI loss of control (LOC) scenarios, in which human oversight fails to adequately constrain an autonomous, general-purpose AI system.
Operate and Monitor: Running, maintaining, and monitoring the AI system post-deployment
Infrastructure Provider: Entity providing compute, platforms, or tooling for AI systems
Manage: Prioritising, responding to, and mitigating AI risks