Safety culture, knowledge dissemination, and talent development within the organization.
Guide safe and responsible use
Develop operationalization guidelines for the AI system. Implement an AI literacy and ethics awareness program to ensure that all stakeholders understand how to use and interact with the AI system, taking ethics and system limitations into account. Ensure that users of a collaborative AI system understand the ethics embodied in normal operation as well as its ethical boundaries. Regularly assess (via user feedback or questionnaires) stakeholders' ability to exercise human oversight over the AI system. Train developers and system users to recognize and mitigate ethical risks.
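The regular questionnaire-based assessment of stakeholders' oversight ability can be sketched as a simple aggregation of scores per stakeholder group. The group names, scores, and threshold below are illustrative assumptions, not part of the blueprint:

```python
from statistics import mean

# Hypothetical questionnaire responses: each stakeholder group maps to a list
# of self-assessed oversight-ability scores on a 1-5 Likert scale.
responses = {
    "developers": [4, 5, 4, 3],
    "end_users": [2, 3, 2, 4],
    "ethics_advisors": [5, 4, 5, 5],
}

THRESHOLD = 3.0  # assumed cut-off: groups averaging below this may need refresher training

def oversight_report(responses, threshold=THRESHOLD):
    """Return the mean score per group and flag groups below the threshold."""
    report = {}
    for group, scores in responses.items():
        avg = mean(scores)
        report[group] = {"mean": avg, "needs_training": avg < threshold}
    return report

for group, row in oversight_report(responses).items():
    print(f"{group}: mean={row['mean']:.2f}, needs_training={row['needs_training']}")
```

Tracking these per-group means across assessment rounds gives a concrete signal for when the training program (see below) needs adjustment.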
2.4.4 Training & Awareness
Evaluate human centeredness
Verify through regular post-deployment reviews that human-AI interaction and human oversight remain effective. These reviews track how effectively the system supports AI-assisted decision-making.
2.3.3 Monitoring & Logging
Establish AI training for professional development
Ensure regular updates of the AI literacy and sustainability training. Adjust training programs based on analysis of observed versus expected outcomes, on system improvements, on user and stakeholder feedback, and on advancements in state-of-the-art and energy-efficient technologies. Ensure that employees acquire sufficient knowledge for developing, improving, deploying, or using the AI system throughout the entire life cycle.
2.4.4 Training & Awareness
Measure impact on social goals
Identify the Sustainable Development Goals (SDGs), also known as the Global Goals adopted by the United Nations [35], societal benefits/social goals, or sustainability goals on which the AI system can have an impact. Ensure regular screening so that the system's social impact aligns with ethical standards and long-term social benefits, and that environmental issues are mitigated through sustainable practices during system operations.
2.2.1 Risk Assessment
Measure energy consumption
Measure energy consumption and related costs during the training and inference stages. Identify options to minimize the system's carbon footprint, for example by choosing a smaller (foundation) AI model or by effective fine-tuning. Compare the effects of hosting the model on premises, in the cloud, and on edge devices.
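A back-of-the-envelope comparison of hosting options can be sketched from average power draw and runtime. All figures below (power draw, runtimes, electricity prices, grid carbon intensity) are illustrative assumptions; in practice they would come from measured values, e.g. logged GPU power readings:

```python
# Hypothetical daily figures for three hosting options (not measured values).
DEPLOYMENTS = {
    "on_premise_gpu": {"avg_power_w": 300.0, "hours": 24.0, "cost_per_kwh": 0.30},
    "cloud_gpu":      {"avg_power_w": 250.0, "hours": 24.0, "cost_per_kwh": 0.25},
    "edge_device":    {"avg_power_w": 15.0,  "hours": 24.0, "cost_per_kwh": 0.30},
}

GRID_CO2_KG_PER_KWH = 0.4  # assumed grid carbon intensity; varies by region

def energy_footprint(avg_power_w, hours, cost_per_kwh,
                     co2_factor=GRID_CO2_KG_PER_KWH):
    """Estimate energy (kWh), cost, and CO2 (kg) from average power and runtime."""
    kwh = avg_power_w * hours / 1000.0
    return {"kwh": kwh, "cost": kwh * cost_per_kwh, "co2_kg": kwh * co2_factor}

for name, cfg in DEPLOYMENTS.items():
    fp = energy_footprint(**cfg)
    print(f"{name}: {fp['kwh']:.2f} kWh/day, "
          f"{fp['cost']:.2f} currency units/day, {fp['co2_kg']:.2f} kg CO2/day")
```

The same arithmetic applies to the training stage by substituting the training run's duration and measured power draw.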
2.2.1 Risk Assessment
Scope and Governance of Ethics Assessment defined
2.1.3 Policies & Procedures
Scope and Governance of Ethics Assessment defined > Define governance
Assemble an Ethics Board, with roles, responsibilities, system objectives, and accountability structures defined within the first few months of project initiation.
2.1.2 Roles & Accountability
Scope and Governance of Ethics Assessment defined > Identify potential incidents
Analyze current literature to identify potential incidents, and related proposed mitigation measures for the specific use case.
2.2.1 Risk Assessment
Scope and Governance of Ethics Assessment defined > Define review process
Establish a regular review mechanism for ethics clearance through a process of multi-stakeholder collaboration that includes ethics advisors.
2.1.3 Policies & Procedures
Scope and Governance of Ethics Assessment defined > Define scope
Select the AI system features that must undergo ethical screening by the Ethics Board.
2.1.3 Policies & Procedures
Data Fairness defined
2.2 Risk & Assurance
Towards Trusted AI: A Blueprint for Ethics Assessment in Practice
Wirth, Christoph Tobias; Maftei, Mihai; Martín-Peña, Rosa Esther; Merget, Iris (2025)
The development of AI technologies leaves room for unforeseen ethical challenges. Issues such as bias, lack of transparency, and data privacy must be addressed during the design, development, and deployment stages throughout the lifecycle of AI systems to mitigate their impact on users. Consequently, ensuring that such systems are responsibly built has become a priority for researchers and developers from both the public and private sectors. As a proposed solution, this paper presents a blueprint for AI ethics assessment. The blueprint provides an adaptable approach for AI use cases that is agnostic to ethics guidelines, regulatory environments, business models, and industry sectors. It offers an outcomes library of key performance indicators (KPIs) guided by a mapping of ethics framework measures to the processes and phases defined by the blueprint. The main objectives of the blueprint are to provide an operationalizable process for the responsible development of ethical AI systems, and to enhance the public trust needed for broad adoption of trusted AI solutions. In an initial pilot, the blueprint for AI ethics assessment is applied to a use case of generative AI in education. © Christoph Tobias Wirth, Mihai Maftei, Rosa Esther Martín-Peña, and Iris Merget.
Deploy
Releasing the AI system into a production environment
Deployer
Entity that integrates and deploys the AI system for end users
Govern
Policies, processes, and accountability structures for AI risk management