Structured analysis to identify, characterize, and prioritize potential harms and risks.
Also in Risk & Assurance
Identifying, managing, and monitoring potential risks and impacts caused by the AI system to align the system with the organization’s strategic goals and values.
Literature streams: Algorithmic impact assessment (Kaminski & Malgieri, 2020; Metcalf et al., 2021)
Reasoning
Identifies and analyzes potential risks and impacts to align system with organizational goals—core risk assessment practice.
AI system harms and impacts pre-assessment
Understanding what harms and societal impacts an AI system may create is a crucial precondition for sustainable AI system development. The intensity of potential harms and impacts varies significantly across AI systems: an industrial AI system with no direct effects on individuals or the environment will likely create limited harms and impacts, whereas an AI system that makes irreversible decisions affecting the rights and obligations of individuals will have profound impacts and may generate significant harms.

The AI System Owner should ensure that the organization conducts a harms and impacts pre-assessment at the outset of AI system development and documents its outcomes. The pre-assessment should cover a wide range of potential harm and impact creation pathways, considering the harms the AI system may create and the impact it may have on its users, possible decision-making subjects, other affected parties, society at large, and the environment. AI system risk assessments often focus on the direct harms the systems may create; the harms and impacts pre-assessment should, however, also aim at identifying potential system-level harms and impacts. These include the social action affordances the system may create or modify, its potential wealth and power distribution implications, and its effects on equality.

The ethical advisory function should be involved in the pre-assessment if the AI system is likely to create a non-negligible risk of harm to individuals or the environment. The AI System Owner should ensure that the harms and impacts pre-assessment is repeated if the design parameters of the AI system undergo fundamental changes.
2.2.1 Risk Assessment: Algorithm risk assessment
Algorithms constitute the backbones of AI systems: AI system performance is driven by algorithm performance, and possible AI system biases and unfair outcomes often emanate from algorithm design. If the organization has access to the algorithms in its AI systems, identifying possible algorithm risks and assessing their gravity is key to sustainable AI system development and operation. Algorithm risk assessment should cover, to the extent possible, a wide range of algorithm-related risk sources and causes. The Algorithm Owner should at least ensure that the organization 1) explores and documents how the algorithm affects the operations of the entire AI system, 2) explores and documents the possible risk of biased and, in particular, discriminatory outcomes, and 3) explores and documents the risk of unfair outcomes and harms the algorithm may generate. As identifying biases and unfairness is often complex and contentious, the reviews should involve ethical and legal experts, particularly if the organization intends to use the algorithm in a high-risk use case. In machine learning algorithms, testing algorithm outputs may be necessary for identifying biases and discriminatory outcomes. In addition, if a machine learning algorithm incorporates inferences made from training data, the risk assessment should review and assess 4) the risk of detecting non-existing patterns and correlations in the data, and 5) the level of algorithmic scrutability and explainability.
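The output testing mentioned above can be illustrated with a simple group-wise comparison of decision rates. The following is a minimal sketch, not part of the framework: the function name, the data shape, and the sample figures are hypothetical, and a real assessment would use richer fairness metrics and statistical tests.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Largest gap in favourable-outcome rates between groups.

    `decisions` is a list of (group_label, outcome) pairs, where outcome
    is 1 for a favourable decision and 0 otherwise. A large gap is a
    signal to investigate the algorithm for discriminatory bias.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decision log: group "a" is favoured twice as often as "b"
sample = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]
gap, rates = demographic_parity_gap(sample)
```

A threshold on the gap could then feed the documented risk assessment, though what counts as an acceptable gap is a contextual, ethical, and legal question rather than a purely technical one.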
2.2.1 Risk Assessment: AI system health, safety, and fundamental rights impact assessment
AI system impacts on the health and safety of humans will likely remain the most important concerns that organizations should address when developing and using AI. These impacts, together with fundamental rights impacts, will likely be the centerpieces of future regulatory instruments. If the AI system harms and impacts pre-assessment (T40) indicates that the AI system will likely have non-negligible impacts on the health, safety, and fundamental rights of individuals, the AI System Owner should ensure that the organization undertakes and documents 6) a health impact review to identify the potential health impacts the AI system may have on the physical and psychological well-being of its users, subjects, and other affected parties, 7) a safety impact review to identify the potential safety risks the AI system may impose on individuals' and organizations' tangible assets, and 8) a fundamental rights impact review to identify the potential impacts that the AI system may have on the protection and realization of individuals' fundamental rights. The legal advisory function should be involved in the assessment.
2.2.1 Risk Assessment: AI system non-discrimination assurance
Many jurisdictions have non-discrimination laws and impose equal treatment requirements. Non-compliance with these laws and requirements is incompatible with sustainable AI operations and may create significant legal and reputational risks. The AI System Owner should ensure that the organization conducts and documents a non-discrimination assurance process to ensure that the AI system outputs comply with non-discrimination laws and equal treatment requirements. The legal advisory function should be involved in both designing and conducting the assurance.

Ensuring that an AI system creates no discrimination risk is challenging due to the nature of non-discrimination and equal treatment law. For example, under the Finnish Equality Act, an AI system would directly discriminate against a person if the system treated the person less favorably than others based on their age, nationality, language, religion, belief, opinion, political activity, trade union activity, family relationships, state of health, disability, sexual orientation, or other personal characteristics. Less favorable treatment is discrimination even if it is based on an apparently neutral rule. Despite the prima facie ban, differential treatment can be justified if it is mandated by law or has an acceptable objective in terms of basic and human rights, and the measures taken to attain the aim are proportionate.

Conducting a diligent non-discrimination assurance is particularly important for AI systems with algorithms developed using machine learning approaches, as these approaches may result in inadvertent discrimination. Because the algorithms are often unexplainable, detecting discriminatory bias may require post-hoc analysis tools and testing AI system outputs on real-world data.
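One common post-hoc screening statistic for such output testing is the disparate impact ratio. The sketch below is illustrative only: the "four-fifths rule" threshold comes from US employment-selection guidance, not from the Finnish Equality Act or any framework requirement, and the sample figures are hypothetical.

```python
def disparate_impact_ratio(selected, total, selected_ref, total_ref):
    """Ratio of a group's selection rate to the reference group's rate.

    Values well below 1.0 suggest the group is treated less favourably;
    the US 'four-fifths rule' flags ratios under 0.8 for investigation.
    """
    rate = selected / total
    rate_ref = selected_ref / total_ref
    return rate / rate_ref

# Hypothetical real-world output data: 30 of 100 applicants selected in
# one group versus 60 of 100 in the reference group.
ratio = disparate_impact_ratio(30, 100, 60, 100)
flagged = ratio < 0.8  # screening signal, not a legal conclusion
```

Such a ratio is only a screening signal; whether differential treatment is legally justified depends on the jurisdiction-specific analysis described above and should involve the legal advisory function.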
2.2.3 Auditing & Compliance: AI system impact mitigation
Minimizing AI system impacts is an important phase in sustainable AI system development and deployment. Minimization first requires that the potential impacts are analyzed and appropriate measures are taken to eliminate or reduce adverse impacts where possible. Second, it requires that the organization mitigates the effects of the adverse impacts that it cannot eliminate or, third, manages their consequences. To arrive at an acceptable AI system impact, the AI System Owner should ensure that the organization 1) conducts a thorough analysis of the potential impacts the system may have on its users, subjects, affected parties, or the environment, and 2) develops and implements a risk minimization plan. The risk minimization plan should be designed to ensure that the AI system is acceptable and aligned with the organization's values and risk tolerance. The plan should outline 1) appropriate measures to eliminate adverse impacts to the extent possible, 2) appropriate measures to reduce adverse impacts that cannot be eliminated, 3) appropriate measures to mitigate the effects of the residual adverse impacts, and 4) appropriate measures to manage the adverse impacts that cannot be mitigated.
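The four tiers of the risk minimization plan have a natural hierarchy that can be captured in a simple record structure for documentation purposes. This is a hypothetical sketch of how a plan might be represented; the class and field names are illustrative, not prescribed by the framework.

```python
from dataclasses import dataclass, field
from enum import Enum

class MitigationTier(Enum):
    ELIMINATE = 1   # remove the adverse impact entirely
    REDUCE = 2      # lower impacts that cannot be eliminated
    MITIGATE = 3    # soften the effects of residual impacts
    MANAGE = 4      # handle consequences that cannot be mitigated

@dataclass
class Measure:
    impact: str
    tier: MitigationTier
    description: str

@dataclass
class RiskMinimizationPlan:
    system_id: str
    measures: list = field(default_factory=list)

    def measures_for(self, tier):
        """Return the documented measures at one tier of the plan."""
        return [m for m in self.measures if m.tier == tier]

# Hypothetical entry in a documented plan:
plan = RiskMinimizationPlan("AI-0001")
plan.measures.append(Measure("biased scoring of applicants",
                             MitigationTier.REDUCE,
                             "retrain on a rebalanced dataset"))
```

Recording each measure against an explicit tier makes it auditable whether residual adverse impacts have a mitigation or management measure, rather than silently falling through the plan.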
2.2.1 Risk Assessment: AI system impact metrics design
Acceptable AI system impact performance can only be ensured by deploying appropriate metrics to measure impacts. The AI System Owner should ensure that the organization defines and documents metrics for monitoring the AI system impacts during its operational use, and that the impact metrics align with the organization's values and risk tolerance.
2.3.3 Monitoring & Logging: AI system impact monitoring design
Monitoring AI system impact is crucial to ensuring that its impacts remain acceptable. The monitoring must be systematic and metrics-based to achieve consistency over time. The AI System Owner should ensure that the organization defines, documents, and entrenches 5) workflows and technical interfaces to facilitate the monitoring of AI system impact, including, for example, 6) automated or manual production and reporting of impact metrics data, 7) alarm thresholds, 8) workflows that allocate monitoring responsibilities, and 9) workflows to address issues detected during health checks. The AI System Owner should ensure that the AI system impact monitoring process aligns with the organization's values and risk tolerance.
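The combination of reported metrics data and alarm thresholds can be sketched as a single check that surfaces breaches for the responsible owner. All names and figures below are hypothetical illustrations of the workflow, not components defined by the framework.

```python
def check_impact_metrics(metrics, thresholds):
    """Compare reported impact metric values against alarm thresholds.

    Returns the names of metrics whose reported value exceeds the
    documented alarm threshold, so the monitoring workflow can route
    them to the responsible owner.
    """
    return [name for name, value in metrics.items()
            if name in thresholds and value > thresholds[name]]

# Hypothetical metrics from an automated reporting job, with documented
# alarm thresholds for each metric:
report = {"complaint_rate": 0.04, "override_rate": 0.12}
alarms = check_impact_metrics(report, {"complaint_rate": 0.02,
                                       "override_rate": 0.20})
```

Here only `complaint_rate` breaches its threshold, which would trigger the issue-handling workflow; metrics without a documented threshold are ignored, which is itself a gap the monitoring design should flag.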
2.3.3 Monitoring & Logging: AI system impact monitoring
The AI System Owner should ensure that the organization implements the planned AI system impact monitoring processes. If the monitoring processes disclose a breach of impact standards or indicate a value or risk tolerance misalignment, the AI System Owner should initiate appropriate measures to address the breach or regain alignment.
2.3.3 Monitoring & Logging: AI system impact health check
The AI System Owner should ensure that the organization performs the regular planned impact health checks. The reviews should assess whether the AI system impacts align with the organization's values and risk tolerance. If a review discloses a misalignment, the AI System Owner should initiate appropriate measures to regain alignment.
2.3.3 Monitoring & Logging: AI System
Ensuring that the AI system is developed, operated, and monitored in alignment with the organization’s strategic goals and values.
2.1.3 Policies & Procedures: AI System > AI system repository and AI ID
Coordinated AI development, operation, and use are essential to organizations' sustainable AI operations. All organizations using AI systems should operate an AI system repository. The repository should 1) identify all AI systems the organization is developing, operating, using, or has retired, 2) assign each system a unique identifier, and 3) contain the relevant documents the organization has produced or received on each AI system.
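The three repository requirements can be sketched as a minimal in-memory registry. This is a hypothetical illustration only: the class name, the `AI-NNNN` identifier scheme, and the example entries are assumptions, and a real repository would be a persistent, access-controlled system.

```python
import itertools

class AISystemRepository:
    """Minimal registry: unique AI IDs plus a per-system document store."""

    def __init__(self):
        self._systems = {}
        self._counter = itertools.count(1)

    def register(self, name, status="in development"):
        """Enter a system into the repository and assign a unique AI ID."""
        ai_id = f"AI-{next(self._counter):04d}"
        self._systems[ai_id] = {"name": name, "status": status,
                                "documents": []}
        return ai_id

    def attach_document(self, ai_id, document):
        """File a produced or received document under the system's AI ID."""
        self._systems[ai_id]["documents"].append(document)

    def get(self, ai_id):
        return self._systems[ai_id]

# Hypothetical usage: register a system and file its pre-assessment.
repo = AISystemRepository()
ai_id = repo.register("invoice classifier")
repo.attach_document(ai_id, "harms and impacts pre-assessment.pdf")
```

Keying all governance documents to the AI ID is what lets later tasks (pre-design, use case definition, monitoring) reference one authoritative record per system.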
2.1.3 Policies & Procedures: AI System > AI system pre-design
Once an organization initiates an AI system development project, it should perform a pre-design of the system. The Head of AI (T54) should ensure that the organization 1) enters the AI system into the AI repository (T1), 2) assesses whether the AI system can align with the organization's values and risk tolerance, 3) initiates the development processes and assigns roles and responsibilities, and 4) identifies and documents the planned AI system's key features and design constraints.
2.1.3 Policies & Procedures: AI System > AI system use case
Identifying and understanding the intended use case of an AI system and its other possible uses is key to sustainable AI development and use. The use case affects the system's regulatory environment and may have significant reputational risk implications. The AI System Owner (T55) should ensure that the organization defines and documents 1) the intended use case of the AI system and 2) the possible other uses of the AI system. The AI System Owner should ensure that the use case definition aligns with the organization's values and risk tolerance, and that the organization takes adequate measures to prevent AI system misuse.
2.1.3 Policies & Procedures: AI System > AI system user
People in organizations use AI systems. Some AI systems make decisions that directly or indirectly affect humans and their rights and obligations (affected persons). Sustainable AI system development and use require that the organization is conscious of who is using the AI system and whose rights and obligations it may affect. The organization should define and document 1) the intended AI system user organizations and human users, 2) the intended affected persons, and 3) possible other users and affected persons. The AI System Owner (T55) should ensure that the user definitions align with the organization's values and risk tolerance.
2.1.3 Policies & Procedures: AI System > AI system operating environment
AI systems are embedded in the business and organizational environment. This environment typically consists of technological and social elements and is a key driver of AI system impacts. The organization should define and document 1) the intended business or operational model and environment of the AI system, 2) the intended IT environment the AI system is embedded in and interacts with, and 3) the other AI systems the AI system is intended to interact with.
2.1.3 Policies & Procedures: Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance
Mäntymäki, Matti; Minkkinen, Matti; Birkstedt, Teemu; Viljanen, Mika (2022)
The organizational use of artificial intelligence (AI) has rapidly spread across various sectors. Alongside the awareness of the benefits brought by AI, there is a growing consensus on the necessity of tackling the risks and potential harms, such as bias and discrimination, brought about by advanced AI technologies. A multitude of AI ethics principles have been proposed to tackle these risks, but the outlines of organizational processes and practices for ensuring socially responsible AI development are in a nascent state. To address the paucity of comprehensive governance models, we present an AI governance framework, the hourglass model of organizational AI governance, which targets organizations that develop and use AI systems. The framework is designed to help organizations deploying AI systems translate ethical AI principles into practice and align their AI systems and processes with the forthcoming European AI Act. The hourglass framework includes governance requirements at the environmental, organizational, and AI system levels. At the AI system level, we connect governance requirements to AI system life cycles to ensure governance throughout the system's life span. The governance model highlights the systemic nature of AI governance and opens new research avenues into its practical implementation, the mechanisms that connect different AI governance layers, and the dynamics between the AI governance actors. The model also offers a starting point for organizational decision-makers to consider the governance components needed to ensure social acceptability, mitigate risks, and realize the potential of AI.
Other (multiple stages)
Applies across multiple lifecycle stages
Deployer
Entity that integrates and deploys the AI system for end users
Unable to classify
Could not be classified to a specific AIRM function
Primary
6.5 Governance failure