Structured analysis to identify, characterize, and prioritize potential harms and risks.
Assess transparency
Document the performance and uncertainty of the AI model with respect to fairness objectives. Justify the personal attributes in the data that are used for fairness assessment.
2.2.4 Assurance Documentation
Assess explainability
Conduct an explainability assessment of the AI system to ensure that the system's potential decision-influencing processes are clear and understandable to stakeholders, users, and addressees of the system's outcomes. Explainability assessments are of paramount importance for decision-assist systems. Here the focus is on detecting decision boundaries and deriving concrete recommendations for action in gray-area situations or for high-stakes decisions.
2.2.2 Testing & Evaluation
Assess safety and security
Conduct a safety and security evaluation of the AI system to ensure all identified risks to fairness and operational safety are documented and classified by materiality prior to deployment.
2.2.1 Risk Assessment
Identify AI materiality
Document all risks associated with the AI model's materiality and categorize all impacts with respect to severity and likelihood. Identify mitigation strategies for each risk.
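The severity-and-likelihood categorization above can be sketched as a simple risk register. This is an illustrative sketch only: the blueprint does not prescribe a scoring scale, so the 1-3 severity and likelihood bands, the multiplicative score, and the tier cut-offs below are assumptions.

```python
# Minimal risk-register sketch. The 1-3 scales and the score thresholds
# are illustrative assumptions, not part of the blueprint.
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    severity: int     # 1 = minor, 2 = moderate, 3 = severe (assumed scale)
    likelihood: int   # 1 = rare, 2 = possible, 3 = likely (assumed scale)
    mitigation: str   # mitigation strategy identified for this risk


def materiality(risk: Risk) -> str:
    """Classify a risk by the product of severity and likelihood."""
    score = risk.severity * risk.likelihood
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"


register = [
    Risk("biased outcome for a protected group", 3, 2, "fairness audit before release"),
    Risk("opaque decision boundary", 2, 2, "explainability assessment"),
    Risk("minor UI mislabel", 1, 1, "copy review"),
]

for r in register:
    print(f"{r.name}: {materiality(r)} (mitigation: {r.mitigation})")
```

Any comparable two-axis scheme would serve; the point is that each documented risk carries both an impact classification and a named mitigation, as the outcome requires.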
2.2.1 Risk Assessment
Update materiality classification
Define a process for post-hoc assessment or audit of the AI model's materiality classification. Flag any newly observed risks and resolve them through continuous system-improvement initiatives.
2.2.3 Auditing & Compliance
Scope and Governance of Ethics Assessment defined
2.1.3 Policies & Procedures
Scope and Governance of Ethics Assessment defined > Define governance
Assemble an Ethics Board, with roles, responsibilities, system objectives, and accountability structures defined within the first few months of project initiation.
2.1.2 Roles & Accountability
Scope and Governance of Ethics Assessment defined > Identify potential incidents
Analyze the current literature to identify potential incidents and related proposed mitigation measures for the specific use case.
2.2.1 Risk Assessment
Scope and Governance of Ethics Assessment defined > Define review process
Establish a regular review mechanism for ethics clearance through multi-stakeholder collaboration, including ethics advisors.
2.1.3 Policies & Procedures
Scope and Governance of Ethics Assessment defined > Define scope
Select the AI system features that must undergo ethical screening by the Ethics Board.
2.1.3 Policies & Procedures
"Data Fairness defined"
2.2 Risk & Assurance
Towards Trusted AI: A Blueprint for Ethics Assessment in Practice
Wirth, Christoph Tobias; Maftei, Mihai; Martín-Peña, Rosa Esther; Merget, Iris (2025)
The development of AI technologies leaves room for unforeseen ethical challenges. Issues such as bias, lack of transparency, and data privacy must be addressed during the design, development, and deployment stages of the AI system lifecycle to mitigate their impact on users. Consequently, ensuring that such systems are responsibly built has become a priority for researchers and developers from both the public and private sectors. As a proposed solution, this paper presents a blueprint for AI ethics assessment. The blueprint provides an adaptable approach for AI use cases that is agnostic to ethics guidelines, regulatory environments, business models, and industry sectors. It offers an outcomes library of key performance indicators (KPIs), guided by a mapping of ethics-framework measures to the processes and phases defined by the blueprint. The main objectives of the blueprint are to provide an operationalizable process for the responsible development of ethical AI systems and to build the public trust needed for broad adoption of trusted AI solutions. In an initial pilot, the blueprint for AI ethics assessment is applied to a use case of generative AI in education. © Christoph Tobias Wirth, Mihai Maftei, Rosa Esther Martín-Peña, and Iris Merget.
Verify and Validate
Testing, evaluating, auditing, and red-teaming the AI system
Developer
Entity that creates, trains, or modifies the AI system
Measure
Quantifying, testing, and monitoring identified AI risks
Primary
7 AI System Safety, Failures & Limitations