Commits to establishing a shared scientific basis for AI risk assessments. Requires risk assessments to be actionable, transparent, comprehensive, multistakeholder, iterative, and reproducible. Encourages collaboration among stakeholders and adaptation of methodologies as AI systems evolve.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a non-binding joint statement that establishes voluntary principles and commitments for AI risk assessment. It uses predominantly voluntary language ('should') and relies on collaborative adherence rather than legal enforcement mechanisms.
The document provides minimal to good coverage of 12 subdomains, with its strongest focus on AI system safety failures (7.1, 7.2, 7.3), governance failure (6.5), and competitive dynamics (6.4). It emphasizes risk assessment methodologies rather than specific risk types, with implicit coverage of malicious actors (4.1, 4.2, 4.3) and multi-agent risks (7.6).
This document does not govern specific economic sectors. It is a cross-sectoral framework for AI risk assessment methodology applicable to advanced AI systems across all sectors. The document references the Sustainable Development Goals, suggesting broad applicability, but does not establish sector-specific governance measures.
The document comprehensively covers the entire AI lifecycle with particular emphasis on iterative risk assessment throughout development, deployment, and post-deployment monitoring. It explicitly addresses planning, development, deployment, and operational monitoring stages, emphasizing that risks should be assessed and mitigated across all stages of the AI lifecycle.
The document explicitly mentions AI systems and multiple types of advanced AI including frontier AI systems, dual-use foundation models, general-purpose AI models, and advanced generative AI systems. It does not mention specific compute thresholds or distinguish between open-weight and closed models. The focus is on advanced AI systems broadly defined.
International Network of AI Safety Institutes
The document is explicitly authored and proposed by the International Network of AI Safety Institutes, a multinational network of government AI safety institutes committed to establishing a shared scientific basis for AI risk assessments.
The document does not specify any enforcement body or mechanisms. It is a voluntary commitment statement with no binding enforcement provisions. Individual network members retain flexibility to implement according to their own frameworks.
International Network of AI Safety Institutes (collective monitoring through collaboration), independent third-party evaluators
The document suggests collaborative monitoring through the Network's joint activities and mentions independent third-party evaluators as a means of supporting reproducibility. However, no formal monitoring body is designated.
Network members (AI Safety Institutes), organizations conducting AI risk assessments, developers of advanced AI systems
The document targets Network members who will conduct risk assessments, as well as organizations developing and deploying advanced AI systems. The guidance applies to those assessing the risks of frontier AI systems, dual-use foundation models, general-purpose AI models, and advanced generative AI systems.
12 subdomains (4 Good, 8 Minimal)