Many jurisdictions have non-discrimination laws that impose equal treatment requirements. Non-compliance with these laws and requirements is incompatible with sustainable AI operations and may create significant legal and reputational risks. The AI System Owner should ensure that the organization conducts and documents a non-discrimination assurance process to verify that the AI system's outputs comply with non-discrimination laws and equal treatment requirements. The legal advisory function should be involved in both designing and conducting the assurance.

Ensuring that an AI system creates no discrimination risk is challenging because of the nature of non-discrimination and equal treatment law. For example, under the Finnish Equality Act, an AI system would directly discriminate against a person if it treated that person less favorably than others based on their age, nationality, language, religion, belief, opinion, political activity, trade union activity, family relationships, state of health, disability, sexual orientation, or other personal characteristics. Less favorable treatment is discrimination even if it is based on an apparently neutral rule. Despite this prima facie ban, differential treatment can be justified if it is mandated by law, or if the treatment has an acceptable objective in terms of basic and human rights and the measures to attain that objective are proportionate.

Conducting a diligent non-discrimination assurance is particularly important for AI systems whose algorithms were developed using machine learning approaches, as machine learning may result in inadvertent discrimination. Because such algorithms are often unexplainable, detecting discriminatory bias may require post-hoc analysis tools and testing of AI system outputs on real-world data.
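The post-hoc output testing mentioned above can be sketched as a simple comparison of favorable-outcome rates across groups defined by a protected characteristic. The group names, decision data, and the 80% threshold below are illustrative assumptions (the threshold is a common heuristic, not a legal test under the Finnish Equality Act or any other statute):

```python
# Hypothetical post-hoc fairness check on recorded AI system decisions.
# Groups, data, and threshold are illustrative, not legal standards.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of binary decisions (1 = favorable)."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Illustrative real-world output sample, keyed by a protected characteristic.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 favorable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 favorable
}

ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # heuristic screening threshold; escalate, don't conclude
    print("flag for legal review: possible less favorable treatment")
```

A screening result like this does not establish discrimination; under the framework's logic it triggers involvement of the legal advisory function to assess justification and proportionality.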
D1. Planning and design
Reasoning
Organization conducts post-hoc analysis and real-world testing to evaluate AI outputs for non-discrimination compliance.
AI System
Ensuring that the AI system is developed, operated, and monitored in alignment with the organization’s strategic goals and values.
2.1.3 Policies & Procedures
AI System > AI system repository and AI ID
Coordinated AI development, operation, and use are essential to organizations' sustainable AI operations. All organizations using AI systems should operate an AI system repository. The repository should 1) identify all AI systems the organization is developing, operating, using, or has retired, 2) assign each system a unique identifier, and 3) contain the relevant documents the organization has produced or received on each AI system.
AI System > AI system pre-design
Once an organization initiates an AI system development project, it should perform a pre-design of the system. The Head of AI (T54) should ensure that the organization 1) enters the AI system into the AI repository (T1), 2) assesses whether the AI system can align with the organization's values and risk tolerance, 3) initiates the development processes and assigns roles and responsibilities, and 4) identifies and documents the planned AI system's key features and design constraints.
AI System > AI system use case
Identifying and understanding the intended use case of an AI system, as well as its other possible uses, is key to sustainable AI development and use. The use case affects the system's regulatory environment and may have significant reputational risk implications. The AI System Owner (T55) should ensure that the organization defines and documents 1) the intended use case of the AI system and 2) its possible other uses. The AI System Owner should also ensure that the use case definition aligns with the organization's values and risk tolerance, and that the organization takes adequate measures to prevent AI system misuse.
AI System > AI system user
People in organizations use AI systems. Some AI systems make decisions that directly or indirectly affect humans and their rights and obligations (affected persons). Sustainable AI system development and use require that the organization is conscious of who is using the AI system and whose rights and obligations it may affect. The organization should define and document 1) the intended AI system user organizations and human users, 2) the intended affected persons, and 3) possible other users and affected persons. The AI System Owner (T55) should ensure that the user definitions align with the organization's values and risk tolerance.
AI System > AI system operating environment
AI systems are embedded in a business and organizational environment, which typically consists of technological and social elements. The operating environment is a key driver of AI system impacts. The organization should define and document 1) the intended business or operational model and environment of the AI system, 2) the intended IT environment the AI system is embedded in and interacts with, and 3) the other AI systems the system is intended to interact with.
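The three documentation items above could be captured in a simple structured record with a completeness check before the documentation is accepted. The field names and example values are illustrative assumptions, not part of the framework:

```python
# Illustrative operating-environment record mirroring the three items
# in the text; all keys and values are assumptions for demonstration.
operating_environment = {
    "business_model": "consumer credit pre-screening for a retail bank",
    "it_environment": ["core banking platform", "CRM", "data warehouse"],
    "interacting_ai_systems": ["fraud detection model"],
}

REQUIRED_FIELDS = ("business_model", "it_environment", "interacting_ai_systems")

def missing_fields(env, required=REQUIRED_FIELDS):
    """Return required documentation fields that are absent or empty."""
    return [key for key in required if not env.get(key)]

print(missing_fields(operating_environment))
```

A check like this makes the documentation requirement enforceable in a workflow: an environment description with missing fields can be rejected automatically before sign-off.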
Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance
Mäntymäki, Matti; Minkkinen, Matti; Birkstedt, Teemu; Viljanen, Mika (2022)
The organizational use of artificial intelligence (AI) has rapidly spread across various sectors. Alongside the awareness of the benefits brought by AI, there is a growing consensus on the necessity of tackling the risks and potential harms, such as bias and discrimination, brought about by advanced AI technologies. A multitude of AI ethics principles have been proposed to tackle these risks, but the outlines of organizational processes and practices for ensuring socially responsible AI development are in a nascent state. To address the paucity of comprehensive governance models, we present an AI governance framework, the hourglass model of organizational AI governance, which targets organizations that develop and use AI systems. The framework is designed to help organizations deploying AI systems translate ethical AI principles into practice and align their AI systems and processes with the forthcoming European AI Act. The hourglass framework includes governance requirements at the environmental, organizational, and AI system levels. At the AI system level, we connect governance requirements to AI system life cycles to ensure governance throughout the system's life span. The governance model highlights the systemic nature of AI governance and opens new research avenues into its practical implementation, the mechanisms that connect different AI governance layers, and the dynamics between the AI governance actors. The model also offers a starting point for organizational decision-makers to consider the governance components needed to ensure social acceptability, mitigate risks, and realize the potential of AI.
Plan and Design
Designing the AI system, defining requirements, and planning development
Deployer
Entity that integrates and deploys the AI system for end users
Govern
Policies, processes, and accountability structures for AI risk management