Governance frameworks, formal policies, and strategic alignment mechanisms.
Ensuring that the AI system is developed, operated, and monitored in alignment with the organization’s strategic goals and values.
Literature streams: Software development and project management (Dennehy & Conboy, 2018)
Reasoning
Establishes strategic governance framework aligning AI development, operation, and monitoring with organizational values and goals.
AI system repository and AI ID
Coordinated AI development, operation, and use are essential to organizations' sustainable AI operations. All organizations using AI systems should operate an AI system repository. The repository should 1) identify all AI systems the organization is developing, operates, uses, or has retired, 2) assign them a unique identifier, 3) contain the relevant documents the organization has produced or received on the AI system.
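The three repository requirements above can be sketched in code. This is a minimal illustration, not a prescribed schema: the record fields ("status", "documents") and the use of a random hex identifier are assumptions made for the example.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    status: str  # e.g. "in development", "operational", "retired"
    # Requirement 2: assign each system a unique identifier (AI ID).
    ai_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    # Requirement 3: hold the documents produced or received on the system.
    documents: list = field(default_factory=list)

class AISystemRepository:
    """Requirement 1: identify all AI systems the organization is
    developing, operates, uses, or has retired."""

    def __init__(self):
        self._records = {}

    def register(self, record: AISystemRecord) -> str:
        """Enter a system into the repository and return its AI ID."""
        self._records[record.ai_id] = record
        return record.ai_id

    def attach_document(self, ai_id: str, document: str) -> None:
        """File a document under the system's AI ID."""
        self._records[ai_id].documents.append(document)

repo = AISystemRepository()
sys_id = repo.register(AISystemRecord(name="claims-triage-model",
                                      status="in development"))
repo.attach_document(sys_id, "use-case-definition-v1.pdf")
```

In practice the repository would be a shared database rather than an in-memory object, but the same structure (unique ID as the key, documents filed against it) carries over.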
2.1.3 Policies & Procedures

AI system use case
Identifying and understanding the intended use case of an AI system and its other possible uses is key to sustainable AI development and use. The use case affects the system's regulatory environment and may have significant reputational risk implications. The AI System Owner (T55) should ensure that the organization defines and documents 1) the intended use case of the AI system and 2) the possible other uses of the AI system. The AI System Owner should ensure that the use case definition aligns with the organization's values and risk tolerance. The AI System Owner should ensure that the organization takes adequate measures to prevent misuse of the AI system.
2.1.3 Policies & Procedures

AI system pre-design
Once an organization initiates an AI system development project, it should perform a pre-design of the system. The Head of AI (T54) should ensure that the organization 1) enters the AI system into the AI repository (T1), 2) assesses whether the AI system can align with the organization's values and risk tolerance, 3) initiates the development processes and assigns roles and responsibilities, and 4) identifies and documents the planned AI system's key features and design constraints.
2.1.3 Policies & Procedures

AI system user
People in organizations use AI systems. Some AI systems make decisions that directly or indirectly affect humans and their rights and obligations (affected persons). Sustainable AI system development and use require that the organization is conscious of who is using the AI system and whose rights and obligations it may affect. The organization should define and document 1) the intended AI system user organizations and human users, 2) the intended affected persons, and 3) possible other users and affected persons. The AI System Owner (T55) should ensure that the user definitions align with the organization's values and risk tolerance.
2.1.3 Policies & Procedures

AI system operating environment
AI systems are embedded in the business and organizational environment. This environment typically consists of technological and social elements. The operating environment is a key driver of AI system impacts. The organization should define and document 1) the intended business or operational model and environment of the AI system, 2) the intended IT environment the AI system is embedded in and interacts with, 3) the other intended AI systems the AI system interacts with.
2.1.3 Policies & Procedures

AI system operational metrics
Organizations can only ensure desired AI system performance by deploying appropriate metrics to evaluate it. The AI System Owner should ensure that the organization defines and documents performance metrics for assessing the AI system during its operational use. The AI System Owner should ensure that the key target performance metrics align with the organization's values and risk tolerance.
2.2.2 Testing & Evaluation

AI system deployment metrics
The organization can only ensure desired AI system performance by deploying appropriate metrics to evaluate it. The AI System Owner should ensure that the organization defines and documents predeployment performance metrics that the AI system must meet before deployment or updates. The AI System Owner should ensure that the performance metrics align with the organization's values and risk tolerance.
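A pre-deployment gate of this kind can be sketched as a check of measured metrics against documented targets. The metric names, target values, and the "higher is better" convention below are illustrative assumptions, not values the framework prescribes.

```python
# Hypothetical pre-deployment performance targets the system must
# meet before deployment or updates (values are examples only).
DEPLOYMENT_METRICS = {
    "accuracy":            {"target": 0.95, "higher_is_better": True},
    "false_positive_rate": {"target": 0.02, "higher_is_better": False},
}

def meets_deployment_targets(measured: dict) -> bool:
    """Return True only if every documented metric meets its target."""
    for name, spec in DEPLOYMENT_METRICS.items():
        value = measured.get(name)
        if value is None:
            return False  # an unreported metric blocks deployment
        if spec["higher_is_better"] and value < spec["target"]:
            return False
        if not spec["higher_is_better"] and value > spec["target"]:
            return False
    return True

deployable = meets_deployment_targets({"accuracy": 0.97,
                                       "false_positive_rate": 0.01})
```

Treating a missing metric as a failure, rather than skipping it, keeps the gate conservative: a version cannot be approved simply because a required measurement was never produced.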
2.2.2 Testing & Evaluation

AI system architecture
AI systems are part of IT systems. IT systems contain various data, computing infrastructures and resources, system architecture, and interfaces. The IT system architecture affects the AI system operations and may affect its risks and impacts. The organization should 1) define and document the position of the AI system in the organization's IT architecture, and 2) document and manage the AI system's interactions with the organization's other IT systems.
2.1.3 Policies & Procedures

AI system version control
AI systems will likely undergo several redesigns and update cycles during their lifetime. Designing and implementing an effective version control system integrated with the AI governance framework processes is crucial. The AI System Owner should ensure that the organization defines, documents, and entrenches 1) quality control processes for new versions and updates and 2) version control and approval workflows. The AI System Owner should ensure that the AI system version control design aligns with the organization's values and risk tolerance.
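One way to entrench such an approval workflow is as an explicit state machine: a new version cannot be approved until quality control has passed. The states and the approver argument below are illustrative assumptions, a sketch rather than a definitive implementation.

```python
from enum import Enum

class VersionState(Enum):
    PROPOSED = "proposed"    # new version or update entered
    QC_PASSED = "qc_passed"  # quality control process completed
    APPROVED = "approved"    # cleared for operational use
    REJECTED = "rejected"    # failed quality control

class AISystemVersion:
    def __init__(self, version: str):
        self.version = version
        self.state = VersionState.PROPOSED
        self.approved_by = None

    def record_qc(self, passed: bool) -> None:
        """Record the outcome of the quality control process."""
        self.state = VersionState.QC_PASSED if passed else VersionState.REJECTED

    def approve(self, approver: str) -> None:
        """Approval is only possible after quality control has passed."""
        if self.state is not VersionState.QC_PASSED:
            raise ValueError("only QC-passed versions can be approved")
        self.state = VersionState.APPROVED
        self.approved_by = approver

v = AISystemVersion("1.1.0")
v.record_qc(passed=True)
v.approve("ai-system-owner")
```

Encoding the workflow as states makes breaches detectable: any version that reaches operational use without passing through APPROVED is, by construction, a version control violation.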
2.4.3 Development Workflows

AI system performance monitoring design
Monitoring AI system performance is crucial to ensuring that the system sustains the desired level of performance. The monitoring must be systematic and metrics-based to achieve consistency over time. The AI System Owner should ensure that the organization defines, documents, and entrenches 1) workflows and technical interfaces to facilitate the monitoring of AI system performance, including, for example, a) the automated or manual production and reporting of performance metrics data, b) alarm thresholds, and c) workflows that allocate monitoring responsibilities; 2) workflows and technical interfaces to facilitate the detection of system malfunctions and other anomalous events; and 3) workflows to address issues detected during health checks. The AI System Owner should ensure that the AI system performance monitoring process aligns with the organization's values and risk tolerance.
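The alarm-threshold element of such a design can be sketched as a routine that compares reported metrics against documented thresholds and raises an alert on each breach. The metric names, threshold values, and callback-style alerting are illustrative assumptions for the example.

```python
# Hypothetical alarm thresholds from the monitoring design document
# (values are examples only).
ALARM_THRESHOLDS = {"latency_ms": 500.0, "error_rate": 0.05}

def check_metrics(metrics: dict, on_alarm) -> list:
    """Compare reported metrics to alarm thresholds; call on_alarm
    for every breached threshold and return the breached names."""
    breaches = []
    for name, threshold in ALARM_THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > threshold:
            breaches.append(name)
            on_alarm(name, value, threshold)
    return breaches

# The alarm handler stands in for whatever workflow allocates the
# monitoring responsibility (ticket, page, dashboard entry).
alerts = []
check_metrics({"latency_ms": 720.0, "error_rate": 0.01},
              on_alarm=lambda n, v, t: alerts.append(f"{n}: {v} > {t}"))
```

Routing every breach through a single handler is one way to make the responsibility allocation explicit: whoever owns the handler owns the response.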
2.3.3 Monitoring & Logging

AI system health check
AI systems may be subject to performance deterioration over the medium and long term. In addition, the business, operational, IT, and regulatory environments and stakeholder pressures will change over time. These processes may jeopardize system performance or lead to unacceptable risks. The AI System Owner should ensure that the organization conducts regular comprehensive reviews of the AI system (health checks) to ensure that the AI system aligns with the organization's values and risk tolerance. The AI System Owner should ensure that the organization defines, documents, and entrenches workflows and technical interfaces for reviewing 1) AI system use case, 2) AI system users, 3) AI system operational environment, 4) AI system technical environment, 5) AI system metrics, 6) AI system version control practices, 7) the AI system monitoring practices and 8) the AI system health check practices.
2.2.1 Risk Assessment

AI system verification and validation
Verifying and validating AI system performance is a crucial aspect of AI system development and quality control. In AI systems with machine learning components, verification will require comprehensive validation testing and theoretical and analytical verification. In many cases, validation will require that the developer organization builds a simulation environment where it can explore algorithm performance using comprehensive samples of real-world, non-training data inputs. The AI System Owner should ensure that the organization develops appropriate verification and validation methods for adequate AI system performance.
2.2.2 Testing & Evaluation

AI system approval
Decisions on approving AI systems and AI system versions for operational use should be informed and preceded by a careful review of the AI system documentation. Before deciding to deploy an AI system or AI system version, the AI System Owner should review all documentation on the AI system and ensure that the AI system's impacts are acceptable and that the system meets the performance targets for deployment.
2.1.3 Policies & Procedures

AI system version control
The AI System Owner should ensure that the organization implements the planned AI system version control processes. If the version control processes disclose a breach of version control practices or indicate a value or risk tolerance misalignment, the AI System Owner should initiate appropriate measures to address the breach or regain alignment.
2.4.3 Development Workflows

AI system performance monitoring
The AI System Owner should ensure that the organization implements the planned AI system performance monitoring processes. If the performance monitoring processes disclose a breach of performance standards or indicate a value or risk tolerance misalignment, the AI System Owner should initiate appropriate measures to address the breach or regain alignment.
2.3.3 Monitoring & Logging

AI system health checks
The AI System Owner should ensure that the organization performs the regular planned health checks. The reviews should assess whether the AI system aligns with the organization's values and risk tolerance. If a review discloses a misalignment, the AI System Owner should initiate appropriate measures to regain alignment.
2.3.3 Monitoring & Logging

Algorithms
Ensuring that the algorithms used by an AI system are developed, operated, and monitored in alignment with the organization’s strategic goals and values.
2.1.3 Policies & Procedures

Algorithms > Algorithm ID
Coordinated algorithm development, operation, and use are key to sustainable AI operations in organizations. While some algorithms are developed in-house and others procured from vendors, it is important that the organization is aware of the algorithms it is developing, operating, or using. All organizations using AI systems should operate an Algorithm Repository. The repository should 1) identify, to the extent possible, all algorithms the organization is developing, operates, uses, or has retired, 2) assign them a unique identifier, 3) contain the relevant documents the organization has produced or received on the algorithm.
2.1.3 Policies & Procedures

Algorithms > Algorithm pre-design
Once an organization initiates an algorithm development project, it should perform a pre-design of the algorithm. The Head of AI should ensure that the organization 1) enters the algorithm into the Algorithm Repository, 2) assesses whether the algorithm can align with the organization's values and risk tolerance, 3) initiates the development processes and assigns roles and responsibilities, and 4) identifies and documents the key features and design constraints for the planned algorithm.
2.1.2 Roles & Accountability

Algorithms > Algorithm use case design
Understanding the intended uses of an algorithm together with its possible misuses is key to sustainable AI development and use. For each algorithm in its Algorithm Repository, the organization should define and document, to the extent possible, 1) the intended uses of the algorithm and 2) the foreseeable misuses of the algorithm, if relevant. The use case definition should guide the development processes and build on the risk and impact pre-design and assessment outcomes. The AI System Owner should ensure that the intended use case aligns with the organization's values and risk tolerance. The AI System Owner should ensure that the organization takes adequate measures to prevent algorithm misuse.
2.4.2 Design Standards

Algorithms > Algorithm technical environment design
When operational, algorithms are typically part of AI systems. The AI system architecture and its connections to the organization's other IT systems affect the AI system's impacts. The organization should 1) define and document the position of the algorithm in the AI systems it is a part of, 2) document and manage interactions with the organization's other AI systems and IT systems. The AI System Owner should ensure that the AI system's technical environment aligns with the organization's values and risk tolerance.
2.1.3 Policies & Procedures

Algorithms > Algorithm deployment metrics design
The organization can only ensure desired algorithm performance by designing appropriate metrics to evaluate it. The Algorithm Owner (T56) should ensure that the organization defines and documents predeployment performance metrics that the algorithm must meet prior to deployment or updates. The Algorithm Owner should ensure that the performance metrics align with the organization's values and risk tolerance.
2.2.2 Testing & Evaluation

Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance
Mäntymäki, Matti; Minkkinen, Matti; Birkstedt, Teemu; Viljanen, Mika (2022)
The organizational use of artificial intelligence (AI) has rapidly spread across various sectors. Alongside the awareness of the benefits brought by AI, there is a growing consensus on the necessity of tackling the risks and potential harms, such as bias and discrimination, brought about by advanced AI technologies. A multitude of AI ethics principles have been proposed to tackle these risks, but the outlines of organizational processes and practices for ensuring socially responsible AI development are in a nascent state. To address the paucity of comprehensive governance models, we present an AI governance framework, the hourglass model of organizational AI governance, which targets organizations that develop and use AI systems. The framework is designed to help organizations deploying AI systems translate ethical AI principles into practice and align their AI systems and processes with the forthcoming European AI Act. The hourglass framework includes governance requirements at the environmental, organizational, and AI system levels. At the AI system level, we connect governance requirements to AI system life cycles to ensure governance throughout the system's life span. The governance model highlights the systemic nature of AI governance and opens new research avenues into its practical implementation, the mechanisms that connect different AI governance layers, and the dynamics between the AI governance actors. The model also offers a starting point for organizational decision-makers to consider the governance components needed to ensure social acceptability, mitigate risks, and realize the potential of AI.
Other (multiple stages): applies across multiple lifecycle stages
Deployer: entity that integrates and deploys the AI system for end users
Unable to classify: could not be classified to a specific AIRM function
Primary
6.5 Governance failure