Structured analysis to identify, characterize, and prioritize potential harms and risks.
Algorithms constitute the backbone of AI systems: AI system performance is driven by algorithm performance, and possible AI system biases and unfair outcomes often emanate from algorithm design. If the organization has access to the algorithms in its AI systems, identifying possible algorithm risks and assessing their gravity is key to sustainable AI system development and operation. Algorithm risk assessment should cover, to the extent possible, a wide range of algorithm-related risk sources and causes. The Algorithm Owner should at least ensure that the organization 1) explores and documents how the algorithm affects the operations of the entire AI system, 2) explores and documents the possible risk of biased and, in particular, discriminatory outcomes, and 3) explores and documents the risk of unfair outcomes and harms the algorithm may generate. As identifying biases and unfairness is often complex and contentious, the reviews should involve ethical and legal experts, particularly if the organization intends to use the algorithm in a high-risk use case. For machine learning algorithms, testing algorithm outputs may be necessary for identifying biases and discriminatory outcomes. In addition, if a machine learning algorithm incorporates inferences made from training data, the risk assessment should also review and assess 4) the risk of detecting non-existent patterns and correlations in the data and 5) the level of algorithmic scrutability and explainability.
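One simple starting point for the output testing mentioned above is to compare outcome rates across groups of affected persons. The sketch below computes a demographic parity gap on a labeled evaluation set; the function names, example data, and the choice of metric are illustrative assumptions, not prescribed by this framework, and real assessments would typically use several complementary fairness metrics.

```python
# Minimal sketch of output testing for group-level bias, assuming the
# organization can run the algorithm on an evaluation set with known
# group membership. All names and data below are hypothetical.

def selection_rates(predictions, groups):
    """Positive-outcome rate per group (e.g., share of approved applicants)."""
    rates = {}
    for group in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical binary decisions for applicants from groups "A" and "B".
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A large gap does not by itself establish discrimination; as the text notes, interpreting such results is contentious and should involve ethical and legal experts.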
D1. Planning and design
Reasoning
The organization analyzes algorithms to identify, document, and characterize potential biases, discrimination, unfairness, and other harms.
AI System
Ensuring that the AI system is developed, operated, and monitored in alignment with the organization’s strategic goals and values.
2.1.3 Policies & Procedures
AI System > AI system repository and AI ID
Coordinated AI development, operation, and use are essential to an organization's sustainable AI operations. All organizations using AI systems should operate an AI system repository. The repository should 1) identify all AI systems the organization is developing, operating, using, or has retired, 2) assign each of them a unique identifier, and 3) contain the relevant documents the organization has produced or received on each AI system.
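To illustrate the three repository requirements above, the following sketch models a repository entry with a unique AI ID, a lifecycle status, and attached documents. All class names, fields, and the ID scheme are assumptions made for illustration; an actual repository would typically be a database or a dedicated governance platform rather than an in-memory registry.

```python
# Minimal in-memory sketch of an AI system repository; illustrative only.
from dataclasses import dataclass, field
from enum import Enum
from uuid import uuid4

class LifecycleStatus(Enum):
    IN_DEVELOPMENT = "in development"
    IN_OPERATION = "in operation"
    IN_USE = "in use"
    RETIRED = "retired"

@dataclass
class AISystemRecord:
    name: str
    status: LifecycleStatus
    # Unique AI ID; the "AI-" prefix and UUID scheme are hypothetical.
    ai_id: str = field(default_factory=lambda: f"AI-{uuid4().hex[:8]}")
    # References to documents produced or received on the system.
    documents: list[str] = field(default_factory=list)

class AISystemRepository:
    def __init__(self):
        self._records: dict[str, AISystemRecord] = {}

    def register(self, record: AISystemRecord) -> str:
        """Enter an AI system into the repository and return its AI ID."""
        self._records[record.ai_id] = record
        return record.ai_id

    def attach_document(self, ai_id: str, document_ref: str) -> None:
        """Attach a document reference to a registered AI system."""
        self._records[ai_id].documents.append(document_ref)

repo = AISystemRepository()
ai_id = repo.register(
    AISystemRecord("Credit scoring model", LifecycleStatus.IN_DEVELOPMENT)
)
repo.attach_document(ai_id, "algorithm-risk-assessment.pdf")
```

The unique identifier lets later governance tasks (risk assessments, impact reviews, retirement decisions) reference the same system unambiguously across documents.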
AI System > AI system pre-design
Once an organization initiates an AI system development project, it should perform a pre-design of the system. The Head of AI (T54) should ensure that the organization 1) enters the AI system into the AI repository (T1), 2) assesses whether the AI system can align with the organization's values and risk tolerance, 3) initiates the development processes and assigns roles and responsibilities, and 4) identifies and documents the planned AI system's key features and design constraints.
AI System > AI system use case
Identifying and understanding the intended use case of an AI system, as well as its other possible uses, is key to sustainable AI development and use. The use case affects the system's regulatory environment and may have significant reputational risk implications. The AI System Owner (T55) should ensure that the organization defines and documents 1) the intended use case of the AI system and 2) the possible other uses of the AI system. The AI System Owner should also ensure that the use case definition aligns with the organization's values and risk tolerance and that the organization takes adequate measures to prevent AI system misuse.
AI System > AI system user
People in organizations use AI systems. Some AI systems make decisions that directly or indirectly affect humans and their rights and obligations (affected persons). Sustainable AI system development and use require that the organization is conscious of who is using the AI system and whose rights and obligations it may affect. The organization should define and document 1) the intended AI system user organizations and human users, 2) the intended affected persons, and 3) possible other users and affected persons. The AI System Owner (T55) should ensure that the user definitions align with the organization's values and risk tolerance.
AI System > AI system operating environment
AI systems are embedded in a business and organizational environment that typically consists of technological and social elements. This operating environment is a key driver of AI system impacts. The organization should define and document 1) the intended business or operational model and environment of the AI system, 2) the intended IT environment the AI system is embedded in and interacts with, and 3) the other AI systems the system is intended to interact with.
Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance
Mäntymäki, Matti; Minkkinen, Matti; Birkstedt, Teemu; Viljanen, Mika (2022)
The organizational use of artificial intelligence (AI) has rapidly spread across various sectors. Alongside the awareness of the benefits brought by AI, there is a growing consensus on the necessity of tackling the risks and potential harms, such as bias and discrimination, brought about by advanced AI technologies. A multitude of AI ethics principles have been proposed to tackle these risks, but the outlines of organizational processes and practices for ensuring socially responsible AI development are in a nascent state. To address the paucity of comprehensive governance models, we present an AI governance framework, the hourglass model of organizational AI governance, which targets organizations that develop and use AI systems. The framework is designed to help organizations deploying AI systems translate ethical AI principles into practice and align their AI systems and processes with the forthcoming European AI Act. The hourglass framework includes governance requirements at the environmental, organizational, and AI system levels. At the AI system level, we connect governance requirements to AI system life cycles to ensure governance throughout the system's life span. The governance model highlights the systemic nature of AI governance and opens new research avenues into its practical implementation, the mechanisms that connect different AI governance layers, and the dynamics between the AI governance actors. The model also offers a starting point for organizational decision-makers to consider the governance components needed to ensure social acceptability, mitigate risks, and realize the potential of AI.
Plan and Design
Designing the AI system, defining requirements, and planning development
Deployer
Entity that integrates and deploys the AI system for end users
Map
Identifying and documenting AI risks, contexts, and impacts