Describes OpenAI's progress toward fulfilling the voluntary commitments to promote safety, security, and trust in AI that the company made in July 2023. Details OpenAI's approach to mitigating frontier risks, which are the focus of the UK AI Safety Summit, and its development of a Preparedness Framework.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is an internal corporate policy document describing OpenAI's voluntary commitments and preparedness framework for frontier AI risk management. It lacks binding legal obligations, enforcement mechanisms, or penalties.
The document covers 12 subdomains (8 rated Good, 4 Minimal), with a strong focus on malicious actors (4.1, 4.2, 4.3), AI system security (2.2), dangerous capabilities (7.2), goal misalignment (7.1), lack of robustness (7.3), competitive dynamics (6.4), and governance failure (6.5). Coverage is concentrated in the security, misuse prevention, and AI safety domains.
This is an internal corporate policy document from OpenAI, an AI development company. The sectors governed are those in which OpenAI operates: Information (AI/software development) and Scientific Research and Development Services (AI research). The document does not regulate external sectors but describes OpenAI's own governance of its frontier AI development activities.
The document covers multiple AI lifecycle stages with primary focus on Build and Use Model, Verify and Validate, Deploy, and Operate and Monitor. It emphasizes evaluation, testing, deployment decisions, and post-deployment monitoring of frontier AI models.
The document explicitly focuses on frontier AI models and systems, with detailed discussion of model capabilities, evaluations, and risks. It does not use the terms 'general purpose AI' or 'task-specific AI' but focuses on increasingly capable frontier models. No specific compute thresholds are mentioned. The document addresses model weights and their protection but does not explicitly discuss open-weight or open-source models.
OpenAI
OpenAI is the author and proposer of this preparedness framework, describing its own voluntary commitments and internal governance structures for frontier AI development.
OpenAI Preparedness team; Superalignment team; Safety Systems team; Deployment Safety Board (joint OpenAI-Microsoft)
Internal OpenAI teams and the joint Deployment Safety Board with Microsoft are responsible for implementing and enforcing the framework's provisions.
OpenAI Preparedness team; Alignment Research Center (ARC); external red-teamers; Frontier Model Forum
OpenAI's internal Preparedness team conducts ongoing monitoring, supplemented by external organizations like ARC for evaluations and the Frontier Model Forum for information sharing.
OpenAI; Microsoft
The framework applies to OpenAI's own frontier model development and deployment processes. Microsoft is mentioned as a technology partner subject to the joint Deployment Safety Board.
12 subdomains (8 Good, 4 Minimal)