Runtime monitoring, observability, performance tracking, and anomaly detection in production.
3.1 AI development should be designed to prioritise fairness. This would involve addressing algorithmic and data bias from an early stage with a view to ensuring fairness and non-discrimination.
3.2 Organisations that develop, deploy or use AI systems should remain vigilant to the dangers posed by bias. This could be achieved by establishing ethics boards and codes of conduct, and by adopting industry-wide standards and internationally recognised quality seals.
3.3 In the development and monitoring of AI systems, particular attention should be paid to disadvantaged groups which may be incorrectly represented in the training data.
3.4 AI systems with an important social impact could require independent review and testing on a periodic basis.
Reasoning
Spans multiple organizational L2 categories: governance structures (ethics boards, codes of conduct), risk assessment (independent testing), and design standards (fairness-by-design).
Ethical Purpose and Societal Benefit
Organisations that develop, deploy or use AI systems, and any national laws that regulate such use, should require the purposes of such implementation to be identified and ensure that those purposes are consistent with the overall ethical purposes of beneficence and non-maleficence, as well as with the other principles of the Policy Framework for Responsible AI.
3.2.2 Technical Standards: Ethical Purpose and Societal Benefit > Overarching principles
2.1.3 Policies & Procedures: Ethical Purpose and Societal Benefit > Work and automation
2.2.1 Risk Assessment: Ethical Purpose and Societal Benefit > Environmental impact
2.2.1 Risk Assessment: Ethical Purpose and Societal Benefit > Weaponised AI
3.1.3 International Agreements: Ethical Purpose and Societal Benefit > The weaponisation of false or misleading information
1.2.1 Guardrails & Filtering: Other (multiple stages)
Applies across multiple lifecycle stages
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks