Ensuring that data are sourced, used, and monitored in alignment with the organization’s strategic goals and values.
Literature streams: data governance and data management (Abraham et al., 2019; Brous et al., 2016; Janssen et al., 2020); critical data studies (Iliadis & Russo, 2016).
Reasoning
Data governance policies establish standards for sourcing, using, and monitoring data aligned with organizational values.
Data sourcing
Data is crucial to both AI system and algorithm development and operations. The AI System Owner should ensure that the organization defines and documents AI system data sources and that the organization has the right to process the data. The Algorithm Owner should ensure that the organization defines and documents training, validation, and operational data sources and that the organization has the right to process the algorithm training, validation, and operational data.
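One lightweight way to make such documentation systematic is a structured record per data source. The sketch below is illustrative only: the class, field names, and the simple "documented right to process" check are assumptions, not part of the framework.

```python
from dataclasses import dataclass

@dataclass
class DataSourceRecord:
    """One documented data source for an AI system or algorithm.

    All field names here are hypothetical; adapt them to the
    organization's own documentation standards.
    """
    source_id: str          # unique identifier of the source
    description: str        # what the data contains
    category: str           # "training", "validation", or "operational"
    processing_basis: str   # documented right to process (e.g. contract, consent)
    owner_role: str         # accountable role, e.g. "Algorithm Owner"

def has_processing_basis(record: DataSourceRecord) -> bool:
    """A source should only be used once a right to process is documented."""
    return bool(record.processing_basis.strip())

# Example entry for a hypothetical training-data source.
crm_feed = DataSourceRecord(
    source_id="src-001",
    description="Customer transaction history",
    category="training",
    processing_basis="Contractual necessity, reviewed 2024-01",
    owner_role="Algorithm Owner",
)
```

A register of such records gives both owners a single place to verify that every source used by the system has a documented processing basis.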
Data ontologies, inferences, and proxies
Data resources contain various categories of data. The categories reflect explicit or implicit data ontologies. Data ontologies consist of entity taxonomies (what entities are assumed to exist) and models of entity relationalities and causality (how the entities relate to each other). Data ontologies may have significant implications for how algorithms and AI systems function, what risks they create and to whom, and which entities the AI system affects and how. In advanced machine learning approaches, data ontologies are complex because the source data ontologies combine with the non-representational sensemaking inherent to the approaches. Understanding the ontologies may only be possible by analyzing algorithm outputs. The AI System Owner should ensure that the organization 1) adequately understands the AI system data ontology, 2) has explored the risks related to possible inconclusive evidence, system bias, and discrimination risks, and 3) develops and implements measures to minimize and mitigate possible data-related risks. The Algorithm Owner should ensure that the organization 1) adequately understands the algorithm data ontology, 2) adequately understands what inferences are drawn on the data and what proxies are created when the organization uses a machine learning approach to develop an algorithm, 3) has explored the risks related to possible inconclusive evidence, system bias, and discrimination risks the data ontology may create, and 4) develops and implements measures to minimize and mitigate possible data ontology-related risks. In particular, if the AI system makes decisions that affect natural persons, the AI System Owner should ensure that the organization conducts a comprehensive assessment of the AI system's discrimination, misidentification, and cultural sensitivity risks. The AI System Owner and Algorithm Owner should ensure that the residual risks are acceptable and align with the organization's values and risk tolerance.
Data quality metrics
The organization can only ensure the desired AI system performance by designing appropriate metrics to evaluate data quality. The AI System Owner should ensure that the organization defines and documents data quality metrics for assessing the quality of the data the AI system uses. The Algorithm Owner should ensure that the data quality metrics align with the organization's values and risk tolerance.
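What such metrics look like in practice depends on the system, but two common examples are completeness (required fields are present) and value validity (values fall in an allowed set). The sketch below is a minimal illustration with hypothetical records, not a prescribed metric set.

```python
def completeness(records, required_fields):
    """Share of records in which every required field is present and non-empty."""
    if not records:
        return 0.0
    ok = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required_fields)
    )
    return ok / len(records)

def value_validity(records, field, allowed):
    """Share of records whose value for `field` falls in the allowed set."""
    if not records:
        return 0.0
    return sum(1 for r in records if r.get(field) in allowed) / len(records)

# Three illustrative records; one is missing a required field,
# one carries an out-of-vocabulary segment value.
rows = [
    {"age": 34, "segment": "retail"},
    {"age": None, "segment": "retail"},
    {"age": 51, "segment": "unknown"},
]
print(completeness(rows, ["age", "segment"]))                 # 2 of 3 rows complete
print(value_validity(rows, "segment", {"retail", "corporate"}))  # 2 of 3 values valid
```

Documenting the metric definitions alongside their target thresholds makes later monitoring (see the monitoring sections below) reproducible rather than ad hoc.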
Data quality assurance
Adequate data quality is a crucial precondition to all AI system operations. The AI System Owner should ensure that the organization designs and entrenches appropriate workflows and technical arrangements for 1) gathering and producing information on data quality, and 2) ensuring that the data (including the training, validation, and testing data) is of adequate quality and sufficiently relevant, complete, and representative. Data quality analyses should also include an analysis of additional data needs.
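Representativeness, in particular, can be checked by comparing category shares in the training data against the population the system is meant to serve. The function and tolerance below are a hedged sketch of one such check, not the framework's required method.

```python
def representativeness_gap(sample_counts, population_shares):
    """Largest absolute gap between a category's share in the sample
    and its expected share in the target population."""
    total = sum(sample_counts.values())
    return max(
        abs(sample_counts.get(cat, 0) / total - share)
        for cat, share in population_shares.items()
    )

# Hypothetical figures: training data skews toward the under-40 group.
train = {"under_40": 700, "over_40": 300}
population = {"under_40": 0.55, "over_40": 0.45}

gap = representativeness_gap(train, population)
# A gap above an agreed tolerance signals non-representative training data
# and, per the text above, a need to analyze additional data needs.
print(f"max share gap: {gap:.2f}")
```

The same comparison can be repeated per protected attribute to feed the discrimination-risk assessments discussed under data ontologies.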
Data preprocessing
Training, validation, and operational data undergo preprocessing in many AI systems. The AI System Owner and Algorithm Owner should ensure that the organization designs and implements appropriate workflows and technical interfaces for effective and appropriate data preprocessing. As training data, validation data, and operational data often differ qualitatively, the Algorithm Owner should ensure that the organization understands the differences and designs and implements appropriate workflows and interfaces for preprocessing each data category. The AI System Owner should ensure that the data preprocessing process aligns with the organization's values and risk tolerance.
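A common way to honor both points at once is a shared cleaning step plus a per-category branch. The split and the concrete steps below are placeholders chosen for illustration; real pipelines would substitute the organization's own transformations.

```python
def base_clean(record):
    """Steps shared by all data categories: trim whitespace, normalize case."""
    return {k: v.strip().lower() if isinstance(v, str) else v
            for k, v in record.items()}

def preprocess(record, category):
    """Apply the shared steps, then category-specific handling.

    The branches mirror the observation that training, validation, and
    operational data often differ qualitatively; the steps are hypothetical.
    """
    record = base_clean(record)
    if category == "training":
        # Training data may carry labels that operational data lacks.
        record.setdefault("label", None)
    elif category == "operational":
        # Operational records arrive one by one; mark receipt for traceability.
        record["received"] = True
    return record
```

Keeping the shared step in one function reduces the risk that training-time and serving-time preprocessing silently diverge, a frequent source of data drift symptoms.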
Data quality monitoring
The AI System Owner should ensure that the organization implements the planned data quality processes. If the data quality control processes disclose a breach of data quality standards, data drift, or indicate a value or risk tolerance misalignment, the AI System Owner should initiate appropriate measures to address the breach or regain alignment.
Data health check design
Data resources may be subject to deterioration over the medium and long term. In addition, the business, operational, IT, and regulatory environments and stakeholder pressures will change over time. These processes may jeopardize data access or data quality and lead to unacceptable risks. The AI System Owner should ensure that the organization designs processes for regular comprehensive reviews of the AI system resources (Data health checks) to ensure that the data-related risks are acceptable and align with the organization's values and risk tolerance. The AI System Owner should ensure that the organization defines, documents, and entrenches workflows and technical interfaces to review 1) AI system and algorithm data sources, 2) data preprocessing practices, 3) data quality, and 4) data ontology, inferences, and proxies.
Data quality monitoring design
Monitoring AI system data quality is crucial to ensuring that the AI system sustains the desired level of operational performance. Data quality monitoring must be systematic and metrics-based to achieve consistency over time. The AI System Owner should ensure that the organization defines, documents, and entrenches workflows and technical interfaces to facilitate the monitoring of data quality, including 1) automated or manual production and reporting of data quality indicators and alarm thresholds, 2) workflows that allocate monitoring responsibilities, and 3) workflows to address issues detected during regular monitoring. In particular, the AI System Owner should ensure that the organization can identify anomalous data entries and data drift. The Algorithm Owner should ensure that the data quality monitoring design aligns with the organization's values and risk tolerance.
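As a concrete illustration of an indicator with an alarm threshold, the sketch below flags drift when the mean of a recent window moves too far from a reference window, measured in reference standard deviations. This is one crude drift signal among many; the threshold value and escalation message are assumptions, to be set from the organization's risk tolerance.

```python
import statistics

def mean_shift(reference, current):
    """Absolute shift of the current window's mean from the reference mean,
    in units of the reference standard deviation (a crude drift signal)."""
    ref_mean = statistics.fmean(reference)
    ref_sd = statistics.pstdev(reference) or 1.0  # guard against zero spread
    return abs(statistics.fmean(current) - ref_mean) / ref_sd

ALARM_THRESHOLD = 2.0  # illustrative; derive from documented risk tolerance

# Hypothetical feature values: a stable reference window vs. a shifted one.
reference = [10, 11, 9, 10, 12, 10, 9, 11]
current = [15, 16, 14, 15, 17]

if mean_shift(reference, current) > ALARM_THRESHOLD:
    print("ALARM: possible data drift, escalate per monitoring workflow")
```

Production monitoring would typically complement such a univariate check with distribution-level tests, but the pattern — indicator, threshold, documented escalation path — is the part the framework asks the owners to entrench.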
Data health checks
The Algorithm Owner should ensure that the organization performs the planned data health checks at regular intervals. The reviews should assess whether the AI system data resources and data-related processes align with the organization's values and risk tolerance. If a review discloses a misalignment, the AI System Owner should initiate appropriate measures to regain alignment.
AI System
Ensuring that the AI system is developed, operated, and monitored in alignment with the organization’s strategic goals and values.
AI System > AI system repository and AI ID
Coordinated AI development, operation, and use are essential to organizations' sustainable AI operations. All organizations using AI systems should operate an AI system repository. The repository should 1) identify all AI systems the organization is developing, operating, using, or has retired, 2) assign each a unique identifier, and 3) contain the relevant documents the organization has produced or received on the AI system.
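Those three requirements can be sketched as a minimal in-memory repository. The class, the running "AI-n" identifier scheme, and the example document name are all hypothetical; a real repository would sit on durable, access-controlled storage.

```python
import itertools

class AISystemRepository:
    """Minimal sketch of an AI system repository.

    Assigns each registered system a unique identifier (a simple
    running "AI-n" counter, an assumed scheme) and stores the
    documents the organization holds on it.
    """
    def __init__(self):
        self._systems = {}
        self._counter = itertools.count(1)

    def register(self, name, status="in development"):
        """Enter a system into the repository and return its AI ID."""
        ai_id = f"AI-{next(self._counter)}"
        self._systems[ai_id] = {"name": name, "status": status, "documents": []}
        return ai_id

    def attach_document(self, ai_id, document):
        """Record a document produced or received on the system."""
        self._systems[ai_id]["documents"].append(document)

    def get(self, ai_id):
        return self._systems[ai_id]

# Usage: register a hypothetical system and attach its pre-design assessment.
repo = AISystemRepository()
churn_id = repo.register("Churn predictor")
repo.attach_document(churn_id, "pre-design assessment.pdf")
```

Registering the system is the first pre-design step the next section assigns to the Head of AI, so the repository interface is a natural integration point for that workflow.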
AI System > AI system pre-design
Once an organization initiates an AI system development project, it should perform a pre-design of the system. The Head of AI (T54) should ensure that the organization 1) enters the AI system into the AI repository (T1), 2) assesses whether the AI system can align with the organization's values and risk tolerance, 3) initiates the development processes and assigns roles and responsibilities, and 4) identifies and documents the planned AI system's key features and design constraints.
AI System > AI system use case
Identifying and understanding the intended use case of an AI system and its other possible uses is key to sustainable AI development and use. The use case affects the system's regulatory environment and may have significant reputational risk implications. The AI System Owner (T55) should ensure that the organization defines and documents 1) the intended use case of the AI system and 2) the possible other uses of the AI system. The AI System Owner should ensure that the use case definition aligns with the organization's values and risk tolerance. The AI System Owner should ensure that the organization takes adequate measures to prevent AI system misuse.
AI System > AI system user
People in organizations use AI systems. Some AI systems make decisions that directly or indirectly affect humans and their rights and obligations (affected persons). Sustainable AI system development and use require that the organization is conscious of who is using the AI system and whose rights and obligations it may affect. The organization should define and document 1) the intended AI system user organizations and human users, 2) the intended affected persons, and 3) possible other users and affected persons. The AI System Owner (T55) should ensure that the user definitions align with the organization's values and risk tolerance.
AI System > AI system operating environment
AI systems are embedded in the business and organizational environment. This environment typically consists of technological and social elements. The operating environment is a key driver of AI system impacts. The organization should define and document 1) the intended business or operational model and environment of the AI system, 2) the intended IT environment the AI system is embedded in and interacts with, 3) the other intended AI systems the AI system interacts with.
Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance
Mäntymäki, Matti; Minkkinen, Matti; Birkstedt, Teemu; Viljanen, Mika (2022)
The organizational use of artificial intelligence (AI) has rapidly spread across various sectors. Alongside the awareness of the benefits brought by AI, there is a growing consensus on the necessity of tackling the risks and potential harms, such as bias and discrimination, brought about by advanced AI technologies. A multitude of AI ethics principles have been proposed to tackle these risks, but the outlines of organizational processes and practices for ensuring socially responsible AI development are in a nascent state. To address the paucity of comprehensive governance models, we present an AI governance framework, the hourglass model of organizational AI governance, which targets organizations that develop and use AI systems. The framework is designed to help organizations deploying AI systems translate ethical AI principles into practice and align their AI systems and processes with the forthcoming European AI Act. The hourglass framework includes governance requirements at the environmental, organizational, and AI system levels. At the AI system level, we connect governance requirements to AI system life cycles to ensure governance throughout the system's life span. The governance model highlights the systemic nature of AI governance and opens new research avenues into its practical implementation, the mechanisms that connect different AI governance layers, and the dynamics between the AI governance actors. The model also offers a starting point for organizational decision-makers to consider the governance components needed to ensure social acceptability, mitigate risks, and realize the potential of AI.