Establishes Microsoft's Responsible AI Standard, which requires AI systems to undergo ongoing evaluations and impact assessments, oversight for adverse impacts, and fit-for-purpose validation. It also mandates clear documentation of capabilities and limitations, human oversight, and compliance with privacy, security, and accessibility standards.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is an internal corporate policy document establishing Microsoft's own responsible AI standards for its product development teams. It uses mandatory language ('must', 'required') but enforcement is internal to Microsoft rather than through external legal mechanisms.
The document has good coverage of 11 subdomains, with strong focus on AI system safety and reliability (7.3, 7.4), human-computer interaction (5.1, 5.2), fairness and discrimination (1.1, 1.3), privacy and security (2.1, 2.2), transparency and misinformation risks (3.1), and governance structures (6.5). Coverage is concentrated in the technical safety, fairness, transparency, and operational governance domains.
This is an internal corporate policy that governs Microsoft's AI development activities. As an AI technology company, Microsoft primarily operates in the Information sector (software, cloud services, AI platforms) and in Scientific Research and Development Services. The document does not regulate external sectors; rather, it establishes standards for Microsoft's own AI systems across all of its business operations.
The document comprehensively covers all stages of the AI lifecycle, with particularly strong emphasis on planning/design (Impact Assessments, requirements definition), verification/validation (extensive evaluation requirements), deployment (release criteria, documentation), and operation/monitoring (ongoing evaluation, feedback mechanisms). Data collection and model building are addressed through data governance and model definition requirements.
The document uses the term 'AI systems' throughout and defines requirements that apply to 'All AI systems.' It references models as components of AI systems but does not explicitly distinguish between frontier AI, general purpose AI, task-specific AI, foundation models, or generative vs predictive AI. No compute thresholds or open-weight model distinctions are mentioned.
Microsoft; Office of Responsible AI
Microsoft developed this internal standard through a multi-year effort involving research, policy, and engineering teams. The document is authored by Microsoft for its own AI development operations.
Office of Responsible AI; reviewers identified in Impact Assessment
The Office of Responsible AI is the primary enforcement body, with authority to review Sensitive Uses, receive escalations, and make decisions on how to proceed when criteria cannot be met.
Office of Responsible AI; system owners; developers; customer support
Monitoring is conducted through multiple channels: the Office of Responsible AI provides oversight, while system owners and developers carry out ongoing system health monitoring and evaluation.
Microsoft AI system development teams; Microsoft product development teams
The standard applies to Microsoft's own AI systems and product development teams. All requirements are directed at internal Microsoft teams developing AI systems.
11 subdomains (7 Good, 4 Minimal)