Implementation standards, guidelines, and documented best practices for AI development.
Also in Shared Infrastructure
Organisations that develop, deploy or use AI systems, and any national laws that regulate such use, should require that the purposes of any such implementation be identified, and should ensure that those purposes are consistent with the overarching ethical purposes of beneficence and non-maleficence, as well as with the other principles of the Policy Framework for Responsible AI.
Reasoning
Establishes formal organisational policy requiring that the identified purposes of AI systems align with ethical principles and the broader governance framework.
Overarching principles
2.1.3 Policies & Procedures · Work and automation
2.2.1 Risk Assessment · Environmental impact
2.2.1 Risk Assessment · Weaponised AI
3.1.3 International Agreements · The weaponisation of false or misleading information
1.2.1 Guardrails & Filtering · Accountability
Organisations that develop, deploy or use AI systems, and any national laws that regulate such use, shall respect and adopt the eight principles of this Policy Framework for Responsible AI (or other analogous accountability principles). In all instances, humans should remain accountable for the acts and omissions of AI systems.
3.2.2 Technical Standards · Accountability > Accountability
2.1.2 Roles & Accountability · Accountability > Government
3.1.1 Legislation & Policy · Accountability > Contextual approach
3.1.1 Legislation & Policy · Transparency and Explainability
Organisations that develop, deploy or use AI systems, and any national laws that regulate such use, shall ensure that, to the extent reasonable given the circumstances and the state of the art of the technology, such use is transparent and the decision outcomes of the AI system are explainable.
3.1.1 Legislation & Policy · Transparency and Explainability > Transparency and explainability by design
1.1.4 Model Architecture · Plan and Design
Designing the AI system, defining requirements, and planning development
Governance Actor
Regulator, standards body, or oversight entity shaping AI policy
Govern
Policies, processes, and accountability structures for AI risk management