Laws, mandates, and enforcement mechanisms requiring state authority to create or enforce.
Industry-level stakeholders:

- AI technology producers develop AI technologies for others to build on top of to produce AI solutions (e.g., parts of Google, Microsoft, IBM). AI technology producers may embed RAI in their technologies and/or provide additional RAI tools.
- AI technology procurers procure AI technologies to build their in-house AI solutions (e.g., companies or government agencies buying/using AI platforms/tools). AI technology procurers may care about RAI issues and embed RAI into their AI technology procurement process.
- AI solution producers develop in-house/blended unique solutions on top of technology solutions and need to ensure the solutions adhere to RAI principles/standards/regulations (e.g., parts of Microsoft/Google providing Office/Gmail "solutions"). They may offer the solutions to AI consumers directly or sell them to others. They may use RAI tools (provided by AI technology producers or RAI tool producers) and RAI processes during solution development.
- AI solution procurers procure complete AI solutions (with some further configuration and instantiation) to use internally or offer to external AI consumers (e.g., a government agency buying a complete solution from vendors). They may care about RAI issues and embed RAI into their AI solution procurement process.
- AI users use an AI solution to make decisions that may impact a subject (e.g., a loan officer or a government employee). AI users may exercise additional RAI oversight as the human-in-the-loop.
- AI-impacted subjects are impacted by decisions of the human-AI dyad (e.g., a loan applicant or a taxpayer). AI-impacted subjects may care about RAI issues and contest those decisions on RAI grounds.
- AI consumers consume AI solutions (e.g., voice assistants, search engines, recommender engines) for their personal use (not affecting third parties). AI consumers may care about RAI issues and the RAI aspects of AI solutions.
- RAI governors are those who set and enable RAI policies and controls within their culture. RAI governors could be functions within an organization in the preceding list or external (regulators, consumer advocacy groups, the community).
- RAI tool producers are technology vendors and dedicated companies offering RAI features integrated into AI platforms or AIOps/MLOps tools.
- RAI tool procurers include any of the preceding stakeholders who may purchase or use RAI tools to improve or check the RAI aspects of solutions or technologies.
Reasoning
Describes a cross-stakeholder coordination framework defining roles, responsibilities, and information-sharing patterns across the AI industry value chain.
RAI Regulation
Laws already apply to AI systems; however, the processes and requirements for ensuring compliance are not always clear, and some regulations may need to be updated (e.g., administrative law). There is an urgent need for clear guidance to ensure that AI systems are developed and used responsibly in compliance with existing and upcoming laws (e.g., discrimination law). RAI regulations are developed by governments for their jurisdictions to enable the trustworthy development of AI systems by industry.
3.1.1 Legislation & Policy > Regulatory Sandbox
To enable trials of innovative AI products in the market, a regulatory sandbox can be designed to allow innovative AI products to be tested in the real world under relaxed regulatory requirements, but with appropriate safeguards in place, on a time-limited and small-scale basis [98].
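The admission criteria described above (time-limited, small-scale, safeguards in place) can be sketched as a simple eligibility check. The limits and safeguard names below are illustrative assumptions, not values from the catalogue or any actual regulation.

```python
# Hypothetical sketch of regulatory-sandbox admission criteria:
# a trial must be time-limited, small-scale, and safeguarded.
from dataclasses import dataclass
from datetime import date

@dataclass
class SandboxTrial:
    start: date
    end: date
    max_users: int
    safeguards: list[str]  # e.g. ["human oversight", "incident reporting"]

# Illustrative policy limits (assumptions, not real regulatory values).
MAX_TRIAL_DAYS = 180
MAX_TRIAL_USERS = 1000
REQUIRED_SAFEGUARDS = {"human oversight", "incident reporting"}

def admissible(trial: SandboxTrial) -> bool:
    """A trial is admissible only if all three sandbox conditions hold."""
    time_limited = (trial.end - trial.start).days <= MAX_TRIAL_DAYS
    small_scale = trial.max_users <= MAX_TRIAL_USERS
    safeguarded = REQUIRED_SAFEGUARDS <= set(trial.safeguards)
    return time_limited and small_scale and safeguarded

trial = SandboxTrial(
    start=date(2024, 1, 1),
    end=date(2024, 4, 1),
    max_users=500,
    safeguards=["human oversight", "incident reporting"],
)
print(admissible(trial))
```

A real sandbox would of course involve regulator discretion and case-by-case review; the point of the sketch is only that each admission condition is independently checkable.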
3.1.1 Legislation & Policy > Building Code
AI systems may have various degrees of risk depending on their design and application domains. To ensure that AI systems are trustworthy and meet certain minimum standards, a building code can be designed to provide mandatory regulatory rules for authorized parties (e.g., an independent oversight and advisory committee) to assess the compliance of AI systems before they are allowed to launch [104].
3.1.4 Compliance Requirements > RAI Standard
An AI system may use data or components from multiple jurisdictions that have conflicting regulatory requirements on their usage. To enable interoperability between jurisdictions, RAI standards are developed that describe internationally recognized, repeatable processes for developing and using AI systems responsibly; they can be mandated either by law or by contract.
3.2.2 Technical Standards > RAI Maturity Model
Organizations can face challenges that hurt their business if they are not aware of their RAI maturity. An RAI maturity model can be used to assess an organization's RAI capabilities and its degree of readiness to take advantage of AI along a set of dimensions [5, 38, 104, 126]. The RAI maturity model can guide organizations on how to increase their RAI capabilities. The assessment results depend on the quality of the model, including its assessment dimensions and rating methods.
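A maturity assessment of the kind described above can be sketched as ratings over a set of dimensions rolled up into an overall level. The dimension names, the 1-to-5 scale, and the weakest-link rating method below are illustrative assumptions, not the catalogue's model.

```python
# Hypothetical sketch of an RAI maturity assessment.
# Dimensions and the 1 (ad hoc) .. 5 (optimizing) scale are illustrative.
from statistics import mean

ratings = {
    "governance": 3,
    "fairness": 2,
    "transparency": 4,
    "accountability": 3,
}

def maturity_level(ratings: dict[str, int]) -> int:
    """One common rating method: overall maturity is capped by the
    weakest dimension, so a neglected area cannot be masked."""
    return min(ratings.values())

def average_score(ratings: dict[str, int]) -> float:
    """An alternative rating method: a simple mean across dimensions."""
    return mean(ratings.values())

print(f"Overall maturity level: {maturity_level(ratings)}")
print(f"Average score: {average_score(ratings):.1f}")
```

The two functions illustrate why, as the text notes, results depend on the rating method: the same ratings yield different pictures under a weakest-link roll-up versus an average.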
2.2.1 Risk Assessment > RAI Certification
AI is a high-stakes technology that requires evidence of an AI product's compliance with AI standards or regulations before it operates in society. RAI certification can be designed to recognize that an organization or a person has the ability to develop or use an AI system in a way that is compliant with standards or regulations.
2.2.3 Auditing & Compliance > Trust Mark
Consumers in the market usually do not have professional knowledge about AI. To improve public confidence in AI and dispel ethical concerns, a trust mark, a seal of endorsement that is easy for all consumers to understand, can be used to inform consumers about the AI system.
3.2.2 Technical Standards > Independent Oversight
Decisions made by AI systems may lead to severe failures due to their autonomous decision-making processes. To audit AI systems and investigate failures in a trusted way, independent oversight can be conducted by independent oversight boards consisting of experts who are qualified to perform the review and have no conflict of interest with the reviewed organizations.
2.2.3 Auditing & Compliance > Governance Patterns
Governance for RAI systems can be defined as the structures and processes employed to ensure that the development and use of AI systems meet AI ethics principles. Following the structure of Shneiderman [104], governance can be built at three levels: industry level, organization level, and team level.
2.1 Oversight & Accountability > Governance Patterns > Organization-level governance patterns
2.1 Oversight & Accountability > Governance Patterns > Team-level governance patterns
2.1.2 Roles & Accountability > Process Patterns
Process patterns are reusable methods and best practices that the development team can apply during the development process.
2.4.2 Design Standards > Process Patterns > Requirement Engineering
2.4 Engineering & Development > Process Patterns > Design
2.4 Engineering & Development

Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering
Lu, Qinghua; Zhu, Liming; Xu, Xiwei; Whittle, Jon; Zowghi, Didar; Jacquet, Aurelie (2024)
Responsible Artificial Intelligence (RAI) is widely considered one of the greatest scientific challenges of our time and is key to increasing the adoption of Artificial Intelligence (AI). Recently, a number of AI ethics principles frameworks have been published. However, without further guidance on best practices, practitioners are left with little beyond truisms. In addition, significant effort has been placed at the algorithm level rather than the system level, mainly focusing on a subset of mathematics-amenable ethical principles, such as fairness. Nevertheless, ethical issues can arise at any step of the development lifecycle, cutting across many AI and non-AI components of systems beyond AI algorithms and models. To operationalize RAI from a system perspective, in this article, we present an RAI Pattern Catalogue based on the results of a multivocal literature review. Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle. The RAI Pattern Catalogue classifies the patterns into three groups: multi-level governance patterns, trustworthy process patterns, and RAI-by-design product patterns. These patterns provide systematic and actionable guidance for stakeholders to implement RAI. © 2024 Copyright held by the owner/author(s).
Other (outside lifecycle): Outside the standard AI system lifecycle
Governance Actor: Regulator, standards body, or oversight entity shaping AI policy
Govern: Policies, processes, and accountability structures for AI risk management
Primary: 6.5 Governance failure