Internal decision-making bodies, roles, authority structures, and accountability frameworks that establish who has power over AI-related decisions and how they are held responsible.
Organization-level stakeholders:
- Management teams comprise individuals at the senior level of an organization who are responsible for establishing the RAI governance structure and for achieving RAI at the organization level. Management teams include board members, executives, and (middle-level) managers for legal, compliance, privacy, security, risk, and sustainability.
- Employees are individuals hired by an organization to perform work for it, and are expected to adhere to RAI principles in their work.
Reasoning
Defines organizational roles and responsibility assignments for RAI governance across management and employee levels.
Leadership commitment for RAI
The management team needs to understand the values, costs, and risks of adopting AI in an organization, and must commit to building an RAI culture [104]. Leadership commitment is demonstrated by the management team dedicating time and effort to establishing ethics principles and a governance structure (e.g., appointing a chief RAI officer or RAI advisory boards) [108], as well as incorporating RAI into the organization's values, vision, and mission [105], board strategy planning, executives' performance reviews [98], the audit and risk committee's scope [54], and ESG commitments. Leadership commitment fosters an organizational culture of RAI and provides visible sponsorship for building RAI capability.
2.1.1 Leadership Oversight

Ethics committee
Organizations need to build capability that incorporates multiple areas of expertise to address RAI issues. An AI ethics committee is an AI governance body established to develop standard processes for decision making, as well as to approve and monitor AI projects [19, 73].
2.1.9 Other

Code of ethics for RAI
AI may make wrong decisions or behave inappropriately (e.g., harming human lives or purchasing the wrong product). To guide AI-related activities in an organization, a code of ethics defines a set of rules that employees should uphold when developing an AI system.
2.1.3 Policies & Procedures

Ethical risk assessment
Although concerns about AI ethics are increasing, RAI regulation is still at a very early stage. To assess the ethical risks associated with AI systems, an organization needs to extend its existing IT risk framework, or design a new one, to cover AI ethics.
2.2.1 Risk Assessment

Standardized reporting
Standardized reporting is essential to address the opaque, black-box nature of AI systems. Organizations should set up standardized processes and templates for reporting the development process and product design of AI systems to different stakeholders (e.g., AI governors, users, consumers) [100]. RAI regulations may impose such obligations to ensure the transparency and explainability of AI systems.
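As one way to picture this pattern, a standardized report template can be modeled as a fixed set of sections per stakeholder group, with a report accepted only when every required section is present. This is a minimal sketch, not a prescribed format: the stakeholder groups, section names, and the `build_report` helper are hypothetical.

```python
# Hypothetical standardized report templates: each stakeholder group has a
# fixed list of sections that every report for that audience must contain.
REPORT_TEMPLATES = {
    "AI governors": ["ethical risk summary", "compliance status", "audit trail"],
    "users": ["intended use", "known limitations"],
    "consumers": ["data sources", "explanation of outputs"],
}

def build_report(stakeholder: str, content: dict) -> dict:
    """Assemble a report for one stakeholder group from its standard template."""
    sections = REPORT_TEMPLATES[stakeholder]
    missing = [s for s in sections if s not in content]
    if missing:
        raise ValueError(f"report for {stakeholder!r} is missing sections: {missing}")
    # Keep only the sections the template asks for, in template order.
    return {s: content[s] for s in sections}

report = build_report("users", {
    "intended use": "sentiment analysis of product reviews",
    "known limitations": "English text only; not for medical use",
})
```

Enforcing the template at report-creation time is what makes the reporting "standardized": an incomplete report cannot be produced silently.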
2.1.3 Policies & Procedures

Role-Level Accountability Contract
Organizations need an appropriate approach to enable accountability throughout the entire lifecycle of AI systems. Role-level accountability can be established through formal contracts that define the boundaries of responsibility and identify who should be held accountable when an AI system misbehaves.
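The idea of a role-level accountability contract can be sketched as a mapping from lifecycle stage to the role held accountable at that stage, so that a misbehaving system can always be traced to a responsible role. The stage names, role names, and class structure below are illustrative assumptions, not part of the pattern's definition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccountabilityEntry:
    """Assigns one role (not an individual) as accountable for one lifecycle stage."""
    stage: str                 # e.g. "design", "development", "operation"
    role: str                  # role held accountable at that stage
    responsibilities: tuple    # duties bounded by the contract

class AccountabilityContract:
    """Role-level accountability across the AI system lifecycle.

    When the system misbehaves at a given stage, the contract identifies
    which role should be held accountable.
    """
    def __init__(self, entries):
        self._by_stage = {e.stage: e for e in entries}

    def accountable_role(self, stage: str) -> str:
        entry = self._by_stage.get(stage)
        if entry is None:
            # A gap in the contract is itself an accountability failure.
            raise KeyError(f"no accountable role defined for stage {stage!r}")
        return entry.role

contract = AccountabilityContract([
    AccountabilityEntry("design", "RAI architect", ("ethical risk sign-off",)),
    AccountabilityEntry("development", "engineering lead", ("model testing",)),
    AccountabilityEntry("operation", "service owner", ("incident response",)),
])
```

Raising an error for an uncovered stage reflects the pattern's intent: every part of the lifecycle must have a defined accountable role.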
2.1.2 Roles & Accountability

RAI software bill of materials
The RAI software bill of materials keeps a list of the components used to create an AI software product, which AI solution procurers and consumers can use to check the supply chain details of each component of interest and make buying decisions [12]. The supply chain details should at least include the component name, version, supplier, dependency relationships, author of the software bill of materials data, and a timestamp [86]. This provides traceability and transparency about components, allowing procurers and consumers to easily check component information (e.g., supply chain details and context information) and track ethical issues.
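The minimum supply chain fields listed above can be sketched as a small data model. This is a simplified illustration only; real SBOM formats (and the class and field names used here) are assumptions, with the field set taken from the list above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SBOMComponent:
    """One component entry in an RAI software bill of materials."""
    name: str
    version: str
    supplier: str
    dependencies: list = field(default_factory=list)  # names of components this one depends on

@dataclass
class RAISoftwareBOM:
    """Bill of materials for an AI software product.

    Author and timestamp are recorded at the document level, as required
    for the minimum supply chain details.
    """
    author: str
    timestamp: str
    components: list = field(default_factory=list)

    def find(self, name: str):
        """Let a procurer look up supply chain details for a component of interest."""
        return next((c for c in self.components if c.name == name), None)

# Usage: a procurer checks the supplier and dependencies of a model component.
bom = RAISoftwareBOM(
    author="vendor-release-team",
    timestamp=datetime.now(timezone.utc).isoformat(),
    components=[
        SBOMComponent("sentiment-model", "2.3.0", "ModelCo", dependencies=["tokenizer"]),
        SBOMComponent("tokenizer", "1.1.0", "NLPLib"),
    ],
)
```

The `find` lookup is what supports the buying decision described above: a procurer can trace any component of interest back to its supplier and dependencies.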
2.4.3 Development Workflows

Ethics training
It is urgent that employees of organizations think through the potential implications of AI for their work and make ethical choices during the development and use of AI systems. Ethics training provides employees with knowledge on how to deal with ethical issues during development.
2.4.4 Training & Awareness

Governance Patterns
Governance for RAI systems can be defined as the structures and processes employed to ensure that the development and use of AI systems meet AI ethics principles. Following the structure of Shneiderman [104], governance can be built at three levels: industry level, organization level, and team level.
2.1 Oversight & Accountability

Governance Patterns > Industry-level governance patterns
3.1 Legal & Regulatory

Governance Patterns > Team-level governance patterns
2.1.2 Roles & Accountability

Process Patterns
Process patterns are reusable methods and best practices that the development team can apply during the development process.
2.4.2 Design Standards

Process Patterns > Requirement Engineering
2.4 Engineering & Development

Process Patterns > Design
2.4 Engineering & Development

Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering
Lu, Qinghua; Zhu, Liming; Xu, Xiwei; Whittle, Jon; Zowghi, Didar; Jacquet, Aurelie (2024)
Responsible Artificial Intelligence (RAI) is widely considered one of the greatest scientific challenges of our time and is key to increasing the adoption of Artificial Intelligence (AI). Recently, a number of AI ethics principles frameworks have been published. However, without further guidance on best practices, practitioners are left with little beyond truisms. In addition, significant effort has been placed at the algorithm level rather than the system level, focusing mainly on a subset of mathematics-amenable ethical principles, such as fairness. Nevertheless, ethical issues can arise at any step of the development lifecycle, cutting across the many AI and non-AI components of systems beyond AI algorithms and models. To operationalize RAI from a system perspective, in this article we present an RAI Pattern Catalogue based on the results of a multivocal literature review. Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle. The RAI Pattern Catalogue classifies the patterns into three groups: multi-level governance patterns, trustworthy process patterns, and RAI-by-design product patterns. These patterns provide systematic and actionable guidance for stakeholders to implement RAI. © 2024 Copyright held by the owner/author(s).
Other (outside lifecycle): Outside the standard AI system lifecycle
Governance Actor: Regulator, standards body, or oversight entity shaping AI policy
Govern: Policies, processes, and accountability structures for AI risk management
Primary: 6.5 Governance failure