Implementation standards, guidelines, and documented best practices for AI development.
Also in Shared Infrastructure
Stage: Detection
Stakeholder: National Government (AISI)
Additional information: Governments, together with AI developers and other stakeholders, should establish a clear, shared definition of AI loss of control (LOC) and a set of criteria for detecting it. Because AI models can exhibit emergent capabilities and follow unpredictable trajectories, defining LOC uniformly across deployment conditions is difficult. A task force or working group led by AISIs, in collaboration with AI developers and researchers, could develop a comprehensive but flexible definition of LOC.
Reasoning
Establishing shared criteria for AI LOC yields technical standards that can be adopted for evaluation and measurement across the ecosystem.
Monitor critical capability levels (2.2.2 Testing & Evaluation)
Identify early warning signs and emergent capabilities (2.2.1 Risk Assessment)
Establish standardised benchmarks and reporting (3.2.1 Benchmarks & Evaluation)
Implement compute monitoring and anomaly detection (1.2.3 Monitoring & Detection)
Enhance hardware and supply chain oversight (2.3.3 Monitoring & Logging)
Coordinate evaluations and safety testing (2.2.2 Testing & Evaluation)

Strengthening Emergency Preparedness and Response for AI Loss of Control Incidents
Somani, Elika; Friedman, Anjay; Wu, Henry; Lu, Marianne; Byrd, Christopher; van Soest, Henri; Zakaria, Sana (2025)
As artificial intelligence (AI) systems become increasingly embedded in essential infrastructure and services, the risks associated with unintended failures rise. Developing comprehensive emergency response protocols could help mitigate these risks. This report focuses on understanding and addressing AI loss of control (LOC) scenarios, in which human oversight fails to adequately constrain an autonomous, general-purpose AI.
Plan and Design: Designing the AI system, defining requirements, and planning development
Governance Actor: Regulator, standards body, or oversight entity shaping AI policy
Govern: Policies, processes, and accountability structures for AI risk management