Implementation standards, guidelines, and documented best practices for AI development.
Organisations that develop, deploy or use AI systems, and any national laws that regulate such use, shall adopt design regimes and standards that ensure the safety and reliability of AI systems while limiting the exposure of developers and deployers.
4.1 AI systems can perpetuate and exacerbate bias, and have a broad social and economic impact on society. Addressing fairness in AI use requires a holistic approach. In particular, it requires: i. close engagement of technical experts from AI-related fields with statisticians and researchers from the social sciences; and ii. combined engagement between governments, organisations that develop, deploy or use AI systems, and the public at large.

4.2 The Fairness and Non-Discrimination Principle is supported by the Transparency and Accountability Principles. Effective fairness in the use of AI systems requires the implementation of measures in connection with both of these Principles.
Reasoning
Establishes a formal governance framework requiring organisations and governments to adopt design regimes and standards for AI safety and reliability.
Require and/or define explicit ethical and moral principles underpinning the AI system
2.1.3 Policies & Procedures: Standardisation of behaviour
3.2.2 Technical Standards: Ensuring safety, reliability and trust
2.2.2 Testing & Evaluation: Facilitating technological progress at reasonable risk
99.9 Other: Ethical Purpose and Societal Benefit
Organisations that develop, deploy or use AI systems, and any national laws that regulate such use, should require the purposes of such implementation to be identified, and should ensure that those purposes are consistent with the overall ethical purposes of beneficence and non-maleficence, as well as the other principles of the Policy Framework for Responsible AI.
3.2.2 Technical Standards: Ethical Purpose and Societal Benefit > Overarching principles
2.1.3 Policies & Procedures: Ethical Purpose and Societal Benefit > Work and automation
2.2.1 Risk Assessment: Ethical Purpose and Societal Benefit > Environmental impact
2.2.1 Risk Assessment: Ethical Purpose and Societal Benefit > Weaponised AI
3.1.3 International Agreements: Ethical Purpose and Societal Benefit > The weaponisation of false or misleading information
1.2.1 Guardrails & Filtering: Other (multiple stages)
Applies across multiple lifecycle stages
Developer
Entity that creates, trains, or modifies the AI system
Govern
Policies, processes, and accountability structures for AI risk management