Governance frameworks, formal policies, and strategic alignment mechanisms.
1.1 Organisations that develop, deploy or use AI systems should do so in a manner compatible with human agency and the respect for fundamental human rights (including freedom from discrimination).
1.2 Organisations that develop, deploy or use AI systems should monitor the implementation of such AI systems and act to mitigate against consequences of such AI systems (whether intended or unintended) that are inconsistent with the ethical purposes of beneficence and non-maleficence, as well as the other principles of the Policy Framework for Responsible AI set out in this framework.
1.3 Organisations that develop, deploy or use AI systems should assess the social, political and environmental implications of such development, deployment and use in the context of a structured Responsible AI Impact Assessment that assesses risk of harm and, as the case may be, proposes mitigation strategies in relation to such risks.
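The "structured Responsible AI Impact Assessment" in clause 1.3 can be made concrete as a record that ties each identified risk to a dimension (social, political, environmental), a severity, and a proposed mitigation. The sketch below is illustrative only; the framework prescribes no schema, and every field and class name here is a hypothetical choice.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of an impact-assessment record per clause 1.3.
# Field names and severity levels are assumptions, not framework text.

@dataclass
class IdentifiedRisk:
    description: str                 # e.g. "disparate outcomes across groups"
    dimension: str                   # "social", "political", or "environmental"
    severity: str                    # e.g. "low", "medium", "high"
    mitigation: Optional[str] = None # proposed mitigation strategy, if any

@dataclass
class ImpactAssessment:
    system_name: str
    stated_purpose: str              # clause 2 requires purposes to be identified
    risks: list[IdentifiedRisk] = field(default_factory=list)

    def unmitigated_high_risks(self) -> list[IdentifiedRisk]:
        """High-severity risks with no mitigation strategy proposed yet."""
        return [r for r in self.risks
                if r.severity == "high" and r.mitigation is None]

assessment = ImpactAssessment(
    system_name="loan-screening-model",
    stated_purpose="triage loan applications for human review",
)
assessment.risks.append(IdentifiedRisk(
    description="disparate approval rates across protected groups",
    dimension="social",
    severity="high",
))
```

A structure like this gives the "act to mitigate" duty in clause 1.2 something to query: any risk returned by `unmitigated_high_risks()` is an open obligation.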
Reasoning
Abstract principles without specific mechanisms or implementation details; insufficient to identify focal activity.
Ethical Purpose and Societal Benefit
Organisations that develop, deploy or use AI systems and any national laws that regulate such use should require the purposes of such implementation to be identified and ensure that such purposes are consistent with the overall ethical purposes of beneficence and non-maleficence, as well as the other principles of the Policy Framework for Responsible AI.
3.2.2 Technical Standards
Ethical Purpose and Societal Benefit > Work and automation
2.2.1 Risk Assessment
Ethical Purpose and Societal Benefit > Environmental impact
2.2.1 Risk Assessment
Ethical Purpose and Societal Benefit > Weaponised AI
3.1.3 International Agreements
Ethical Purpose and Societal Benefit > The weaponisation of false or misleading information
1.2.1 Guardrails & Filtering
Accountability
Organisations that develop, deploy or use AI systems and any national laws that regulate such use shall respect and adopt the eight principles of this Policy Framework for Responsible AI (or other analogous accountability principles). In all instances, humans should remain accountable for the acts and omissions of AI systems.
3.2.2 Technical Standards
Other (general)
General mitigation not specific to a single lifecycle stage
Developer
Entity that creates, trains, or modifies the AI system
Govern
Policies, processes, and accountability structures for AI risk management