Promotes safe and secure AI through voluntary guidance for developing advanced AI systems. Emphasizes risk assessment, transparency, security controls, and responsible AI governance. Encourages research, international standards, and societal benefits. Prohibits harmful AI applications that undermine democratic values or human rights.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a voluntary international code of conduct with no binding legal obligations or enforcement mechanisms. The document explicitly states that it provides 'voluntary guidance' and relies on organizational commitment rather than legal compulsion.
The document has good coverage of approximately 15-17 subdomains, with strong focus on malicious actors (4.1, 4.2, 4.3), AI system security (2.2), privacy compromise (2.1), competitive dynamics (6.4), governance failure (6.5), and AI safety failures (7.1, 7.2, 7.3, 7.4). Coverage is concentrated in security, misuse prevention, AI safety, and governance domains, with minimal coverage of discrimination/toxicity and human-computer interaction risks.
This document does not govern AI use within specific economic sectors. Rather, it governs organizations that develop advanced AI systems (AI developers), which primarily operate in the Information and Scientific Research and Development Services sectors. The document is technology-developer focused, not sector-application focused.
The document explicitly covers all stages of the AI lifecycle, with particular emphasis on development, testing, deployment, and post-deployment monitoring. It repeatedly references 'across the AI lifecycle' and 'throughout the AI lifecycle' as a core principle.
The document explicitly focuses on 'advanced AI systems, including the most advanced foundation models and generative AI systems'. It does not use compute thresholds and does not explicitly distinguish between general-purpose and task-specific AI. It does not explicitly mention open-weight models, though it does address securing model weights.
Hiroshima Process participants (G7 nations and partners), OECD, GPAI (Global Partnership on AI)
The document is produced by the Hiroshima Process, building on OECD AI Principles, with commitment to develop proposals in consultation with OECD and GPAI. The Hiroshima Process is a G7-led international initiative.
None specified; this is a voluntary compliance framework.
The document does not designate any enforcement body. It is a voluntary code of conduct with no formal enforcement mechanisms. Organizations are expected to self-regulate through internal governance structures.
OECD, GPAI, and other stakeholders (monitoring mechanisms to be developed); organizations themselves through self-assessment
The document commits to developing monitoring tools and mechanisms in consultation with the OECD and GPAI, and organizations are encouraged to implement self-assessment mechanisms. However, no formal monitoring bodies have yet been established.
Organizations developing advanced AI systems, including foundation models and generative AI systems; entities from academia, civil society, private sector, and public sector
The document explicitly targets organizations developing the most advanced AI systems, including foundation models and generative AI. It states that endorsers may include entities from academia, civil society, private sector, and public sector.
18 subdomains (9 Good, 9 Minimal)