Introduces OpenAI's Preparedness Framework to track, evaluate, forecast, and mitigate catastrophic AI risks. Establishes processes for risk evaluation, unknown risk identification, safety baselines, and cross-functional advisory. Limits model deployment or development based on risk levels. Forms a dedicated Preparedness team and Safety Advisory Group to oversee safety measures and decision-making.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is an internal corporate policy document establishing OpenAI's voluntary framework for managing catastrophic AI risks. It contains internal governance structures, safety baselines, and procedural commitments but lacks external enforcement mechanisms or legal penalties.
The document covers nine subdomains, with strong focus on malicious actors (4.1, 4.2, 4.3), AI system security (2.2), competitive dynamics (6.4), governance failure (6.5), and AI safety failures (7.1, 7.2, 7.3). Coverage is concentrated in security, misuse prevention, and AI safety, with minimal coverage of discrimination, privacy, misinformation, and socioeconomic impacts.
This is an internal corporate policy document that governs OpenAI's own operations. As an AI development company, OpenAI operates primarily in the Information sector (AI/ML development, data processing) and Scientific Research and Development Services sector (AI research). The document does not regulate external sectors but rather establishes internal governance for OpenAI's AI development activities.
The document comprehensively covers all AI lifecycle stages, with particular emphasis on Build and Use Model, Verify and Validate, Deploy, and Operate and Monitor. It addresses planning through risk category identification, development through evaluation processes, validation through pre- and post-mitigation testing, deployment through risk-score-based restrictions, and operation through ongoing monitoring of deployed models.
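The deployment and development gates described above reduce to threshold checks on a model's post-mitigation risk score. The sketch below illustrates that logic; the `RiskLevel` enum and function names are hypothetical, though the thresholds themselves (deployment requires a post-mitigation score of "medium" or below; continued development requires "high" or below) follow the framework's stated safety baselines.

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    """Ordered risk levels from the framework's scorecard (illustrative)."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

def may_deploy(post_mitigation: RiskLevel) -> bool:
    """Framework baseline: only models whose post-mitigation score is
    'medium' or below may be deployed."""
    return post_mitigation <= RiskLevel.MEDIUM

def may_continue_development(post_mitigation: RiskLevel) -> bool:
    """Framework baseline: only models whose post-mitigation score is
    'high' or below may be developed further."""
    return post_mitigation <= RiskLevel.HIGH

# Example: a model scored 'high' after mitigations may continue to be
# developed (with safeguards) but may not be deployed.
score = RiskLevel.HIGH
assert may_continue_development(score) and not may_deploy(score)
```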
The document explicitly focuses on 'frontier models' and 'frontier AI models' throughout, referring to increasingly powerful AI models approaching AGI. It does not explicitly use terms such as 'general purpose AI', 'task-specific AI', 'foundation models', or 'generative AI'. It cites a compute threshold (a >2x increase in effective compute) as a trigger for re-running evaluations, and makes one reference to model weights in the context of open-release decisions.
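The >2x effective compute trigger amounts to a simple ratio test against the compute level at the last evaluation run. A minimal sketch, assuming effective compute can be summarized as a single scalar (the framework does not specify how this quantity is measured):

```python
EVAL_TRIGGER_RATIO = 2.0  # framework's stated >2x effective compute threshold

def evaluation_due(current_effective_compute: float,
                   last_evaluated_compute: float) -> bool:
    """Return True when effective compute has more than doubled since the
    last evaluation run, triggering a fresh round of evaluations."""
    return current_effective_compute > EVAL_TRIGGER_RATIO * last_evaluated_compute

# Example: a run at 5e25 FLOP-equivalents vs. 2e25 at the last evaluation
# crosses the 2x threshold and would trigger re-evaluation.
print(evaluation_due(5e25, 2e25))  # True
```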
OpenAI
OpenAI is the author and proposer of this Preparedness Framework, as evidenced by the document's title ('OpenAI Preparedness Framework') and its use of 'we' and 'our' throughout to refer to OpenAI's processes and commitments.
Safety Advisory Group (SAG); OpenAI Leadership; OpenAI Board of Directors; SAG Chair
The framework establishes internal enforcement through the Safety Advisory Group, OpenAI Leadership, and Board of Directors, with the Board holding ultimate oversight and the ability to reverse decisions.
Preparedness team; Safety Advisory Group (SAG); Trustworthy AI team; qualified independent third parties
The Preparedness team is responsible for ongoing monitoring, evaluation, and reporting. Third-party auditors provide independent verification, and the SAG oversees the assessment of the risk landscape.
OpenAI; Preparedness team; Safety Systems team; Security team; Superalignment team; Policy Research team
The framework applies to OpenAI's internal operations, teams, and model development processes. It establishes requirements for various internal teams and governs OpenAI's own AI development and deployment activities.
9 subdomains (8 Good, 1 Minimal)