Governance frameworks, formal policies, and strategic alignment mechanisms.
Ensuring that the algorithms used by an AI system are developed, operated, and monitored in alignment with the organization’s strategic goals and values.
Literature streams: Software development and project management (Dennehy & Conboy, 2018); Critical algorithm studies (Kitchin, 2017; Ziewitz, 2016)
Reasoning
Establishes governance framework aligning algorithm development and operation with organizational strategic goals and values.
Algorithm ID
Coordinated algorithm development, operation, and use are key to sustainable AI operations in organizations. Whether algorithms are developed in-house or procured from vendors, the organization must remain aware of every algorithm it develops, operates, or uses. All organizations using AI systems should operate an Algorithm Repository. The repository should 1) identify, to the extent possible, all algorithms the organization is developing, operating, using, or has retired, 2) assign each a unique identifier, and 3) contain the relevant documents the organization has produced or received on each algorithm.
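As a concrete illustration, the three repository requirements above could be sketched as a minimal data structure. Everything here (the class names, the `ALG-` identifier scheme, the status labels) is a hypothetical implementation choice, not something the framework prescribes:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    IN_DEVELOPMENT = "in development"
    OPERATED = "operated"
    IN_USE = "in use"
    RETIRED = "retired"

@dataclass
class AlgorithmRecord:
    algorithm_id: str                              # requirement 2: unique identifier
    name: str
    status: Status                                 # requirement 1: lifecycle status
    documents: list = field(default_factory=list)  # requirement 3: related documents

class AlgorithmRepository:
    def __init__(self):
        self._records = {}
        self._next_id = 1

    def register(self, name, status):
        """Assign a unique identifier and enter the algorithm into the repository."""
        algorithm_id = f"ALG-{self._next_id:04d}"
        self._next_id += 1
        record = AlgorithmRecord(algorithm_id, name, status)
        self._records[algorithm_id] = record
        return record

    def attach_document(self, algorithm_id, document):
        """Requirement 3: collect documents produced or received on the algorithm."""
        self._records[algorithm_id].documents.append(document)

repo = AlgorithmRepository()
rec = repo.register("credit-scoring-model", Status.IN_DEVELOPMENT)
repo.attach_document(rec.algorithm_id, "pre-design assessment.pdf")
```

In practice the repository would be a database or asset-management system; the point of the sketch is only that each entry couples an identifier, a lifecycle status, and a document trail.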
2.1.3 Policies & Procedures
Algorithm pre-design
Once an organization initiates an algorithm development project, it should perform a preliminary pre-design of the algorithm. The Head of AI should ensure that the organization 1) enters the algorithm into the Algorithm Repository, 2) assesses whether the algorithm can align with the organization's values and risk tolerance, 3) initiates the development processes and assigns roles and responsibilities, and 4) identifies and documents the key features and design constraints for the planned algorithm.
2.1.2 Roles & Accountability
Algorithm use case design
Understanding the intended uses of an algorithm, together with its possible misuses, is key to sustainable AI development and use. For each algorithm in its Algorithm Repository, the organization should define and document, to the extent possible, 1) the intended uses of the algorithm and 2) the foreseeable misuses of the algorithm, where relevant. The use case definition should guide the development processes and build on the risk and impact pre-design and assessment outcomes. The AI System Owner should ensure that the intended use case aligns with the organization's values and risk tolerance. The AI System Owner should also ensure that the organization takes adequate measures to prevent foreseeable misuse of the algorithm.
2.4.2 Design Standards
Algorithm technical environment design
When operational, algorithms are typically part of AI systems. The AI system architecture and its connections to the organization's other IT systems affect the AI system's impacts. The organization should 1) define and document the position of the algorithm in the AI systems it is a part of, 2) document and manage interactions with the organization's other AI systems and IT systems. The AI System Owner should ensure that the AI system's technical environment aligns with the organization's values and risk tolerance.
2.1.3 Policies & Procedures
Algorithm deployment metrics design
The organization can only ensure desired algorithm performance by designing appropriate metrics to evaluate it. The Algorithm Owner (T56) should ensure that the organization defines and documents pre-deployment performance metrics that the algorithm must meet prior to deployment or updates. The Algorithm Owner should ensure that the performance metrics align with the organization's values and risk tolerance.
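A minimal sketch of what a documented pre-deployment metrics gate could look like in code. The metric names and threshold values are purely illustrative assumptions; the framework does not prescribe any particular metrics:

```python
# Hypothetical documented performance targets an algorithm must meet
# before deployment or updates. Names and values are illustrative only.
DEPLOYMENT_TARGETS = {
    "accuracy": 0.90,             # must be >= target
    "false_positive_rate": 0.05,  # must be <= target
}

def meets_deployment_targets(measured: dict) -> bool:
    """Return True only if every documented pre-deployment target is met."""
    if measured["accuracy"] < DEPLOYMENT_TARGETS["accuracy"]:
        return False
    if measured["false_positive_rate"] > DEPLOYMENT_TARGETS["false_positive_rate"]:
        return False
    return True
```

The design point is that the targets are written down once and checked mechanically, so an approval decision can reference a documented, repeatable criterion rather than an ad hoc judgment.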
2.2.2 Testing & Evaluation
Algorithm operational metrics design
The organization can only ensure desired algorithm performance by designing appropriate metrics to evaluate it. The Algorithm Owner should ensure that the organization defines and documents operational performance metrics for assessing algorithm performance during operational use. The Algorithm Owner should ensure that the performance metrics align with the organization's values and risk tolerance.
2.1.3 Policies & Procedures
AI system version control design
AI system algorithms will likely undergo several redesigns and update cycles during their lifetime. Some algorithms may learn continually. Designing and implementing an effective version control system integrated with the AI governance framework processes is crucial to sustainable AI operations. The AI System Owner should ensure that the organization defines, documents, and implements 1) quality control processes for new versions and updates and 2) version control and approval workflows. The Algorithm Owner should ensure that the AI system version control design and practices align with the organization's values and risk tolerance.
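One way to sketch an approval workflow for versions is as a small state machine in which a version cannot reach deployment without passing quality control and an explicit approval step. The states and transitions below are an illustrative assumption, not the framework's required workflow:

```python
from enum import Enum

class VersionState(Enum):
    DRAFT = "draft"
    TESTED = "tested"
    APPROVED = "approved"
    DEPLOYED = "deployed"

# Allowed transitions encode the approval workflow: deployment is only
# reachable via testing (quality control) and an explicit approval.
ALLOWED = {
    VersionState.DRAFT: {VersionState.TESTED},
    VersionState.TESTED: {VersionState.APPROVED, VersionState.DRAFT},
    VersionState.APPROVED: {VersionState.DEPLOYED},
    VersionState.DEPLOYED: set(),
}

def advance(current: VersionState, target: VersionState) -> VersionState:
    """Move a version to a new state, rejecting transitions that skip gates."""
    if target not in ALLOWED[current]:
        raise ValueError(f"Cannot move from {current.value} to {target.value}")
    return target
```

Encoding the workflow as explicit allowed transitions makes skipped gates (for example, deploying a draft directly) fail loudly instead of silently.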
2.4.3 Development Workflows
Algorithm performance monitoring design
Monitoring algorithm performance is crucial to ensure that the organization sustains the desired level of operational performance. The monitoring must be systematic and metrics-based to achieve consistency over time. The AI System Owner should ensure that the organization defines, documents, and implements 1) workflows and technical interfaces to facilitate the monitoring of AI system performance, including, for example, automated or manual production and reporting of performance metrics data, alarm thresholds, and workflows that allocate monitoring responsibilities, and 2) workflows to address issues detected during regular monitoring and health checks. The Algorithm Owner should ensure that the AI system performance monitoring design process aligns with the organization's values and risk tolerance.
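The alarm-threshold idea above can be sketched in a few lines: documented thresholds are compared against reported metric values, and breaches are returned for routing to whoever holds monitoring responsibility. The metric names and threshold values are hypothetical:

```python
# Illustrative documented alarm thresholds; an alarm fires when the
# reported value exceeds the threshold. Names and values are assumptions.
ALARM_THRESHOLDS = {
    "latency_ms": 250.0,
    "error_rate": 0.02,
}

def check_metrics(metrics: dict) -> list:
    """Return the names of metrics that breach their alarm threshold."""
    return [name for name, value in metrics.items()
            if name in ALARM_THRESHOLDS and value > ALARM_THRESHOLDS[name]]

# Example monitoring cycle: latency breaches its threshold, error rate does not.
alarms = check_metrics({"latency_ms": 310.0, "error_rate": 0.01})
```

A real monitoring pipeline would emit these breaches to an alerting system; the sketch only shows the threshold comparison that makes monitoring metrics-based rather than ad hoc.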
2.3.3 Monitoring & Logging
Algorithm health checks design
Algorithms may be subject to performance deterioration over the medium and long term. In addition, the business, operational, IT, and regulatory environments and stakeholder pressures will change over time. These processes may jeopardize algorithm performance or lead to the emergence of unacceptable risks. The Algorithm Owner should ensure that the organization conducts regular comprehensive reviews of the algorithm (algorithm health checks) to ensure that the algorithm aligns with the organization's values and risk tolerance. The Algorithm Owner should ensure that the organization defines, documents, and implements workflows and technical interfaces to review 1) the AI system use case, 2) the AI system users, 3) the AI system operational environment, 4) the AI system technical environment, 5) the AI system deployment metrics, 6) the AI system operational use metrics, 7) the AI system version control practices, 8) the AI system performance monitoring practices, and 9) the AI system health check practices. The reviews should assess whether the algorithm aligns with the organization's values and risk tolerance. If the review discloses misalignments, the Algorithm Owner should initiate appropriate measures to regain alignment.
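The nine review areas above amount to a recurring checklist. As a minimal sketch, a health check could iterate the documented areas and surface the ones a review found misaligned; the findings format (a simple area-to-boolean mapping) is a hypothetical simplification:

```python
# The nine review areas from the health-check design, as a checklist.
HEALTH_CHECK_ITEMS = [
    "use case", "users", "operational environment", "technical environment",
    "deployment metrics", "operational use metrics",
    "version control practices", "performance monitoring practices",
    "health check practices",
]

def run_health_check(findings: dict) -> list:
    """Return the review areas found misaligned.

    `findings` is an assumed format: {area: True if aligned, False if not};
    areas without an explicit finding are treated as aligned.
    """
    return [item for item in HEALTH_CHECK_ITEMS if not findings.get(item, True)]
```

Any area returned here would trigger the Algorithm Owner's measures to regain alignment.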
2.2.1 Risk Assessment
Algorithm verification and validation
Verifying and validating algorithm performance is a crucial aspect of AI system development and quality control. In AI systems with machine learning components, verification will require comprehensive validation testing in addition to theoretical and analytical verification. In many cases, validation will require that the developer organization builds a simulation environment where it can explore algorithm performance using comprehensive samples of real-world, non-training data inputs. Further, validation may require developing post hoc interpretability tools to gain insight into algorithm logic. The Algorithm Owner should ensure that the organization develops appropriate verification and validation methods to ensure adequate algorithm performance.
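The validation-testing idea described above can be illustrated with a toy example: evaluate the algorithm on held-out, non-training inputs and compare observed performance against a documented target. The stand-in model, sample data, and target accuracy are all illustrative assumptions:

```python
def model(x: float) -> int:
    """Toy stand-in for the algorithm under validation."""
    return 1 if x >= 0.5 else 0

# Assumed held-out samples of (input, expected label) pairs drawn from
# real-world, non-training data.
validation_set = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0), (0.6, 1)]

def validate(samples, target_accuracy=0.8):
    """Measure accuracy on non-training samples against a documented target."""
    correct = sum(1 for x, label in samples if model(x) == label)
    accuracy = correct / len(samples)
    return accuracy >= target_accuracy, accuracy
```

In a real setting the "samples" would come from a simulation environment fed with comprehensive real-world inputs, and the check would cover several metrics rather than a single accuracy figure.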
2.2.2 Testing & Evaluation
Algorithm approval
Decisions approving algorithms and algorithm versions for operational use should be informed and preceded by a careful review of the algorithm. Prior to deciding to deploy an algorithm or algorithm version, the Algorithm Owner should review all documentation on the algorithm and its associated risks and ensure that the algorithm meets the performance targets for deployment. At times, the organization may have limited access to the algorithms in its AI system. In these cases, the approval process should review all available documentation and decide whether deploying the algorithm creates risks that exceed the organization's risk tolerance or breach its legal obligations.
2.2.2 Testing & Evaluation
Algorithm health checks
The Algorithm Owner should ensure that the organization performs the regular planned health checks. The reviews should assess whether the AI system aligns with the organization's values and risk tolerance. If a review discloses a misalignment, the AI System Owner should initiate appropriate measures to regain alignment.
2.3.3 Monitoring & Logging
Algorithm performance monitoring
The Algorithm Owner should ensure that the organization implements the planned algorithm performance monitoring processes. If the performance monitoring processes disclose a breach of performance standards or indicate a value or risk tolerance misalignment, the AI System Owner should initiate appropriate measures to address the breach or regain alignment.
2.3.3 Monitoring & Logging
Algorithm version control
The Algorithm Owner should ensure that the organization implements the planned AI system version control processes. If the version control processes disclose a breach of version control practices or indicate a value or risk tolerance misalignment, the AI System Owner should initiate appropriate measures to address the breach or regain alignment.
2.4.3 Development Workflows
AI System
Ensuring that the AI system is developed, operated, and monitored in alignment with the organization’s strategic goals and values.
2.1.3 Policies & Procedures
AI System > AI system repository and AI ID
Coordinated AI development, operation, and use are essential to organizations' sustainable AI operations. All organizations using AI systems should operate an AI system repository. The repository should 1) identify all AI systems the organization is developing, operating, using, or has retired, 2) assign each a unique identifier, and 3) contain the relevant documents the organization has produced or received on each AI system.
2.1.3 Policies & Procedures
AI System > AI system pre-design
Once an organization initiates an AI system development project, it should perform a preliminary pre-design of the system. The Head of AI (T54) should ensure that the organization 1) enters the AI system into the AI repository (T1), 2) assesses whether the AI system can align with the organization's values and risk tolerance, 3) initiates the development processes and assigns roles and responsibilities, 4) identifies and documents the planned AI system's key features and design constraints.
2.1.3 Policies & Procedures
AI System > AI system use case
Identifying and understanding the intended use case of an AI system and its other possible uses is key to sustainable AI development and use. The use case affects the system's regulatory environment and may have significant reputational risk implications. The AI System Owner (T55) should ensure that the organization defines and documents 1) the intended use case of the AI system and 2) the possible other uses of the AI system. The AI System Owner should ensure that the use case definition aligns with the organization's values and risk tolerance. The AI System Owner should ensure that the organization takes adequate measures to prevent inappropriate AI system misuse.
2.1.3 Policies & Procedures
AI System > AI system user
People in organizations use AI systems. Some AI systems make decisions that directly or indirectly affect humans and their rights and obligations (affected persons). Sustainable AI system development and use require that the organization is conscious of who is using the AI system and whose rights and obligations it may affect. The organization should define and document 1) the intended AI system user organizations and human users, 2) the intended affected persons, and 3) possible other users and affected persons. The AI System Owner (T55) should ensure that the user definitions align with the organization's values and risk tolerance.
2.1.3 Policies & Procedures
AI System > AI system operating environment
AI systems are embedded in the business and organizational environment. This environment typically consists of technological and social elements. The operating environment is a key driver of AI system impacts. The organization should define and document 1) the intended business or operational model and environment of the AI system, 2) the intended IT environment the AI system is embedded in and interacts with, and 3) the other intended AI systems the AI system interacts with.
2.1.3 Policies & Procedures
Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance
Mäntymäki, Matti; Minkkinen, Matti; Birkstedt, Teemu; Viljanen, Mika (2022)
The organizational use of artificial intelligence (AI) has rapidly spread across various sectors. Alongside the awareness of the benefits brought by AI, there is a growing consensus on the necessity of tackling the risks and potential harms, such as bias and discrimination, brought about by advanced AI technologies. A multitude of AI ethics principles have been proposed to tackle these risks, but the outlines of organizational processes and practices for ensuring socially responsible AI development are in a nascent state. To address the paucity of comprehensive governance models, we present an AI governance framework, the hourglass model of organizational AI governance, which targets organizations that develop and use AI systems. The framework is designed to help organizations deploying AI systems translate ethical AI principles into practice and align their AI systems and processes with the forthcoming European AI Act. The hourglass framework includes governance requirements at the environmental, organizational, and AI system levels. At the AI system level, we connect governance requirements to AI system life cycles to ensure governance throughout the system's life span. The governance model highlights the systemic nature of AI governance and opens new research avenues into its practical implementation, the mechanisms that connect different AI governance layers, and the dynamics between the AI governance actors. The model also offers a starting point for organizational decision-makers to consider the governance components needed to ensure social acceptability, mitigate risks, and realize the potential of AI.
Other (multiple stages)
Applies across multiple lifecycle stages
Deployer
Entity that integrates and deploys the AI system for end users
Unable to classify
Could not be classified to a specific AIRM function