Structured analysis to identify, characterize, and prioritize potential harms and risks.
Identify the Sustainable Development Goals (SDGs), also known as the Global Goals adopted by the United Nations [35], societal benefits, or sustainability goals on which the AI system can have an impact. Ensure regular screening so that the system's social impact aligns with ethical standards and long-term social benefits, and that environmental issues are mitigated through sustainable practices during system operations.
Reasoning
Identifies and regularly screens social/environmental impacts to assess alignment with ethical standards and sustainability goals.
Scope and Governance of Ethics Assessment defined
2.1.3 Policies & Procedures
Scope and Governance of Ethics Assessment defined > Define governance
Assemble an Ethics Board, with roles, responsibilities, system objectives, and accountability structures defined within the first few months of project initiation.
2.1.2 Roles & Accountability
Scope and Governance of Ethics Assessment defined > Identify potential incidents
Analyze the current literature to identify potential incidents and related proposed mitigation measures for the specific use case.
2.2.1 Risk Assessment
Scope and Governance of Ethics Assessment defined > Define review process
Establish a regular review mechanism for ethics clearance through multi-stakeholder collaboration, including ethics advisors.
2.1.3 Policies & Procedures
Scope and Governance of Ethics Assessment defined > Define scope
Select the AI system features that must undergo ethical screening by the Ethics Board.
2.1.3 Policies & Procedures
Data Fairness defined
2.2 Risk & Assurance
Towards Trusted AI: A Blueprint for Ethics Assessment in Practice
Wirth, Christoph Tobias; Maftei, Mihai; Martín-Peña, Rosa Esther; Merget, Iris (2025)
The development of AI technologies gives rise to unforeseen ethical challenges. Issues such as bias, lack of transparency, and data privacy must be addressed during the design, development, and deployment stages of the AI system lifecycle to mitigate their impact on users. Consequently, ensuring that such systems are responsibly built has become a priority for researchers and developers in both the public and private sectors. As a proposed solution, this paper presents a blueprint for AI ethics assessment. The blueprint provides an adaptable approach for AI use cases that is agnostic to ethics guidelines, regulatory environments, business models, and industry sectors. It offers an outcomes library of key performance indicators (KPIs) guided by a mapping of ethics framework measures to the processes and phases defined by the blueprint. The main objectives of the blueprint are to provide an operationalizable process for the responsible development of ethical AI systems and to enhance the public trust needed for broad adoption of trusted AI solutions. In an initial pilot, the blueprint for AI ethics assessment is applied to a use case of generative AI in education. © Christoph Tobias Wirth, Mihai Maftei, Rosa Esther Martín-Peña, and Iris Merget.
Operate and Monitor
Running, maintaining, and monitoring the AI system post-deployment
Deployer
Entity that integrates and deploys the AI system for end users
Map
Identifying and documenting AI risks, contexts, and impacts