Design-time architectural choices affecting safety, interpretability, and modularity.
4.1 Organisations that develop AI systems should ensure that the system logic and architecture serve to facilitate transparency and explainability requirements. In so far as is reasonably practicable, and taking into account the state of the art at the time, such systems should be designed from the most fundamental level upwards to promote transparency and explainability by design. Where there is a choice between system architectures that are less or more opaque, the more transparent option should be preferred.

4.2 Users of AI systems and persons subject to their decisions must have an effective way to seek remedy in the event that organisations that develop, deploy or use AI systems are not transparent about their use.
Reasoning
Design-time architectural choices prioritize transparent, explainable system structures over opaque alternatives.
Ethical Purpose and Societal Benefit
Organisations that develop, deploy or use AI systems and any national laws that regulate such use should require the purposes of such implementation to be identified and ensure that such purposes are consistent with the overall ethical purposes of beneficence and non-maleficence, as well as the other principles of the Policy Framework for Responsible AI.
3.2.2 Technical Standards (Ethical Purpose and Societal Benefit > Overarching principles)
2.1.3 Policies & Procedures (Ethical Purpose and Societal Benefit > Work and automation)
2.2.1 Risk Assessment (Ethical Purpose and Societal Benefit > Environmental impact)
2.2.1 Risk Assessment (Ethical Purpose and Societal Benefit > Weaponised AI)
3.1.3 International Agreements (Ethical Purpose and Societal Benefit > The weaponisation of false or misleading information)
1.2.1 Guardrails & Filtering (Other, general)
General mitigation not specific to a single lifecycle stage
Developer
Entity that creates, trains, or modifies the AI system
Map
Identifying and documenting AI risks, contexts, and impacts