Independent audits, third-party reviews, and regulatory compliance verification.
Understanding the regulatory environment of an AI system and ensuring its compliance with the relevant regulations
Regulatory and legal studies (Kaminski & Malgieri, 2020; Viljanen & Parviainen, 2022)
Reasoning
The organization ensures that the AI system adheres to existing regulatory requirements through compliance verification.
Regulatory canvassing
AI systems are typically subject to a variety of regulatory instruments that may force particular design choices, constrain functionalities, or, in extreme cases, make implementing a specific design, use case, or business model impossible. Understanding the regulatory environment is consequently important to prevent misplaced investments. To develop a preliminary understanding of the AI system's regulatory environment, the AI System Owner should ensure that the organization conducts a regulatory environment canvassing. The canvassing provides the organization with basic information on the regulatory environment in which the AI system will be used. It should identify and review the laws and regulations that may affect the AI system and develop a knowledge base of the contents of the primary regulatory instruments and the key constraints that could affect AI system design and operations. All parties involved in developing or implementing an AI system within the organization should be aware of the findings of the canvassing process.
2.2.1 Risk Assessment
Regulatory risks, constraints, and design parameter analysis
Regulation may impose critical constraints and requirements on AI system design. Once a tentative understanding of the intended use case and users of the AI system is reached, the AI System Owner should ensure that the legal function conducts an in-depth analysis of the system and its regulatory environment to identify key regulatory risks, constraints, and design parameters. The analysis should 1) assess regulatory risks associated with known design options, 2) identify key design constraints, 3) identify design areas with significant regulatory implications (key regulatory issues), and 4) outline possible design options and their implications. These regulatory focal points should be clearly communicated to all parties involved in developing or implementing an AI system within the organization.
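The four analysis deliverables above can be captured in a structured record so the focal points are easy to circulate. The following Python sketch is purely illustrative; the class and field names are assumptions, since the framework names the deliverables but prescribes no data format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RegulatoryAnalysis:
    """Hypothetical record of the legal function's analysis outputs;
    field names are illustrative, not prescribed by the framework."""
    system_id: str
    regulatory_risks: List[str] = field(default_factory=list)       # 1) risks of known design options
    design_constraints: List[str] = field(default_factory=list)     # 2) key design constraints
    key_regulatory_issues: List[str] = field(default_factory=list)  # 3) design areas with regulatory implications
    design_options: List[str] = field(default_factory=list)         # 4) options and their implications

    def focal_points(self) -> List[str]:
        """Items to communicate to all development and implementation parties."""
        return self.key_regulatory_issues + self.design_constraints
```

A record like this can be attached to the AI system's documentation so that each development decision can be traced back to the constraint it responds to.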
2.2.1 Risk Assessment
Regulatory design review
Development investments may be lost if the AI system has non-compliant features. To ensure efficient resource allocation and prevent investment slippage, the AI System Owner should ensure that developers consult the legal function before significant decisions affecting the key regulatory issues are made.
2.1.3 Policies & Procedures
Compliance monitoring design
The regulatory environment will likely change during the AI system's lifetime. Changes to the AI system may also disrupt compliance. The AI System Owner should ensure that the organization maintains awareness of possible regulatory changes relevant to the AI system. The organization should develop and entrench appropriate workflows and technical interfaces to facilitate compliance monitoring.
2.3.3 Monitoring & Logging
Compliance health check design
The regulatory environment will likely change during the AI system's lifecycle. Changes to the AI system may also disrupt compliance. The AI System Owner should ensure that the organization conducts regular comprehensive reviews of AI system compliance. The organization should develop and entrench appropriate workflows and technical interfaces to facilitate periodic compliance reviews.
2.2.3 Auditing & Compliance
Compliance assessment
The AI System Owner must ensure that the legal function conducts and documents a compliance assessment before the AI system is approved for operational use or a materially new version is deployed.
2.2.3 Auditing & Compliance
Compliance monitoring
The AI System Owner should ensure that the organization implements the planned AI system compliance monitoring processes. If the system version control processes disclose a non-compliance event, the AI System Owner should initiate appropriate measures to address the breach or regain alignment.
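One way to wire compliance monitoring into the version control and release workflow is to run a set of checks against each new system version and escalate any failure. The sketch below is a minimal illustration under that assumption; the check names, the manifest shape, and the `on_breach` hook are all hypothetical, not part of the framework.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ComplianceCheck:
    """One monitored requirement; `passed` inspects a release manifest."""
    name: str
    passed: Callable[[Dict], bool]

def review_release(manifest: Dict,
                   checks: List[ComplianceCheck],
                   on_breach: Callable[[str], None]) -> bool:
    """Run every check against a new system version.

    Each failure is escalated via `on_breach`, standing in for the
    AI System Owner initiating measures to address the breach.
    """
    compliant = True
    for check in checks:
        if not check.passed(manifest):
            on_breach(check.name)
            compliant = False
    return compliant
```

For example, one check could assert that the release manifest records an approved compliance assessment before the version ships, tying this task back to the compliance assessment requirement.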
2.3.3 Monitoring & Logging
Compliance health checks
The AI System Owner should ensure that the organization performs the regular planned compliance health checks. The reviews should assess whether the AI system compliance processes align with the organization's values and risk tolerance. If a review discloses a misalignment, the AI System Owner should initiate appropriate measures to regain alignment.
2.2.3 Auditing & Compliance
AI System
Ensuring that the AI system is developed, operated, and monitored in alignment with the organization’s strategic goals and values.
2.1.3 Policies & Procedures
AI System > AI system repository and AI ID
Coordinated AI development, operation, and use are essential to organizations' sustainable AI operations. All organizations using AI systems should operate an AI system repository. The repository should 1) identify all AI systems the organization is developing, operating, using, or has retired, 2) assign each a unique identifier, and 3) contain the relevant documents the organization has produced or received on the AI system.
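The three repository requirements above amount to a simple registry data structure. The following Python sketch shows one minimal way to meet them, assuming an in-memory store and an `AI-NNNN` identifier scheme of our own invention; a real repository would persist records and follow the organization's naming conventions.

```python
from dataclasses import dataclass, field
from enum import Enum
from itertools import count
from typing import Dict, List

class Stage(Enum):
    # Lifecycle states named by requirement 1: systems the organization
    # is developing, operating, using, or has retired.
    DEVELOPMENT = "development"
    OPERATION = "operation"
    USE = "use"
    RETIRED = "retired"

@dataclass
class AISystemRecord:
    ai_id: str                                      # requirement 2: unique identifier
    name: str
    stage: Stage                                    # requirement 1: lifecycle state
    documents: List[str] = field(default_factory=list)  # requirement 3: attached documents

class AISystemRepository:
    """Illustrative in-memory registry; persistence is out of scope here."""
    def __init__(self) -> None:
        self._records: Dict[str, AISystemRecord] = {}
        self._seq = count(1)

    def register(self, name: str, stage: Stage = Stage.DEVELOPMENT) -> str:
        ai_id = f"AI-{next(self._seq):04d}"         # hypothetical ID scheme
        self._records[ai_id] = AISystemRecord(ai_id, name, stage)
        return ai_id

    def attach_document(self, ai_id: str, document: str) -> None:
        self._records[ai_id].documents.append(document)

    def get(self, ai_id: str) -> AISystemRecord:
        return self._records[ai_id]
```

The AI ID returned by `register` can then be cited in every downstream governance document, such as the pre-design record or the compliance assessment, so all artifacts for one system stay linked.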
2.1.3 Policies & Procedures
AI System > AI system pre-design
Once an organization initiates an AI system development project, it should perform a preliminary pre-design of the system. The Head of AI (T54) should ensure that the organization 1) enters the AI system into the AI repository (T1), 2) assesses whether the AI system can align with the organization's values and risk tolerance, 3) initiates the development processes and assigns roles and responsibilities, 4) identifies and documents the planned AI system's key features and design constraints.
2.1.3 Policies & Procedures
AI System > AI system use case
Identifying and understanding the intended use case of an AI system and its other possible uses is key to sustainable AI development and use. The use case affects the system's regulatory environment and may have significant reputational risk implications. The AI System Owner (T55) should ensure that the organization defines and documents 1) the intended use case of the AI system and 2) the possible other uses of the AI system. The AI System Owner should ensure that the use case definition aligns with the organization's values and risk tolerance. The AI System Owner should also ensure that the organization takes adequate measures to prevent AI system misuse.
2.1.3 Policies & Procedures
AI System > AI system user
People in organizations use AI systems. Some AI systems make decisions that directly or indirectly affect humans and their rights and obligations (affected persons). Sustainable AI system development and use require that the organization is conscious of who is using the AI system and whose rights and obligations it may affect. The organization should define and document 1) the intended AI system user organizations and human users, 2) the intended affected persons, and 3) possible other users and affected persons. The AI System Owner (T55) should ensure that the user definitions align with the organization's values and risk tolerance.
2.1.3 Policies & Procedures
AI System > AI system operating environment
AI systems are embedded in the business and organizational environment. This environment typically consists of technological and social elements. The operating environment is a key driver of AI system impacts. The organization should define and document 1) the intended business or operational model and environment of the AI system, 2) the intended IT environment the AI system is embedded in and interacts with, 3) the other intended AI systems the AI system interacts with.
2.1.3 Policies & Procedures
Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance
Mäntymäki, Matti; Minkkinen, Matti; Birkstedt, Teemu; Viljanen, Mika (2022)
The organizational use of artificial intelligence (AI) has rapidly spread across various sectors. Alongside the awareness of the benefits brought by AI, there is a growing consensus on the necessity of tackling the risks and potential harms, such as bias and discrimination, brought about by advanced AI technologies. A multitude of AI ethics principles have been proposed to tackle these risks, but the outlines of organizational processes and practices for ensuring socially responsible AI development are in a nascent state. To address the paucity of comprehensive governance models, we present an AI governance framework, the hourglass model of organizational AI governance, which targets organizations that develop and use AI systems. The framework is designed to help organizations deploying AI systems translate ethical AI principles into practice and align their AI systems and processes with the forthcoming European AI Act. The hourglass framework includes governance requirements at the environmental, organizational, and AI system levels. At the AI system level, we connect governance requirements to AI system life cycles to ensure governance throughout the system's life span. The governance model highlights the systemic nature of AI governance and opens new research avenues into its practical implementation, the mechanisms that connect different AI governance layers, and the dynamics between the AI governance actors. The model also offers a starting point for organizational decision-makers to consider the governance components needed to ensure social acceptability, mitigate risks, and realize the potential of AI.
Other (multiple stages): Applies across multiple lifecycle stages
Deployer: Entity that integrates and deploys the AI system for end users
Unable to classify: Could not be classified to a specific AIRM function
Primary: 6.5 Governance failure