Commits Microsoft to responsible AI development and deployment through multi-layered governance, collaboration with OpenAI, and risk management via AI red teaming and evaluations. Establishes a joint Deployment Safety Board with OpenAI for model deployment review, alongside transparency and security controls.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is an internal corporate policy document describing Microsoft's voluntary commitments and responsible AI practices. It references voluntary commitments made at the White House convening and describes internal governance structures, standards, and processes that Microsoft has implemented.
The document provides good coverage of 11 subdomains and minimal coverage of 3 more (14 in total), with strong focus on AI system security (2.2), malicious actors and misuse (4.1, 4.2, 4.3), competitive dynamics (6.4), governance structures (6.5), and AI safety failures (7.1, 7.2, 7.3, 7.4). Coverage is concentrated in the security, misuse prevention, AI safety, and governance domains, with minimal coverage of discrimination, privacy, misinformation, and socioeconomic impacts.
The document comes from Microsoft, an AI developer and technology company. The sectors governed are primarily Information, where Microsoft operates as a technology and AI company, and Scientific Research and Development Services, through Microsoft Research's AI research activities. It does not regulate external sectors; rather, it describes Microsoft's internal governance of AI development and deployment across its own operations.
The document covers all stages of the AI lifecycle, with particularly strong emphasis on the Build and Use Model, Verify and Validate, Deploy, and Operate and Monitor stages. It describes detailed processes for model development, evaluation, testing, deployment review, and ongoing monitoring.
The document explicitly covers AI models, AI systems, and frontier AI in detail, and refers extensively to foundation models and generative AI systems. It does not define compute thresholds or discuss general-purpose AI, task-specific AI, predictive AI, or open-weight models in depth, though it references model capabilities broadly.
Microsoft; OpenAI
Microsoft is the primary proposer of this governance framework, describing its own responsible AI policies, standards, and practices. The document is Microsoft's response to the UK Government's AI Safety Policies Request, outlining voluntary commitments and internal governance structures.
Responsible AI Council; Microsoft-OpenAI Deployment Safety Board (DSB); Office of Responsible AI; Microsoft Security Response Center (MSRC); Digital Security & Resilience team
Microsoft's internal governance bodies enforce its responsible AI policies, including the Responsible AI Council co-chaired by Brad Smith and Kevin Scott, the joint Deployment Safety Board with OpenAI, and various security and compliance teams.
Office of Responsible AI; AI Red Team; Microsoft Security Response Center (MSRC); Responsible AI Division Leads and Champs; Microsoft Threat Intelligence
Multiple internal teams monitor compliance and implementation, including the Office of Responsible AI, the AI Red Team for security testing, and various monitoring and threat intelligence teams that track ongoing performance and risks.
Microsoft product teams; Microsoft engineering teams; OpenAI
The governance framework applies to Microsoft's internal product teams, engineering teams, and development processes. It also applies to the Microsoft-OpenAI collaboration on frontier model development and deployment.
14 subdomains (11 Good, 3 Minimal)