Containment, isolation, and control mechanisms for system execution.
AI mode switcher
Adding an AI mode switcher to the AI system gives users an efficient mechanism for activating and deactivating the AI component whenever needed, deferring the architectural decision to execution time, where it is made by the end user or the operator of the AI system. The AI mode switcher acts like a kill switch that can immediately shut down the AI component and stop its negative effects [78, 89, 116] (e.g., turning off the automated driving system and disconnecting it from the internet). In critical situations, the decisions made by the AI component can be reviewed by a human expert before being executed rather than executed automatically; the human expert approves or overrides the decisions (e.g., skipping the path generated by the navigation system). Human intervention can also happen after a decision has been acted on, through a fallback mechanism that reverts the system to the state before the AI decision was executed. A built-in guard can ensure that the AI component is only activated within pre-defined conditions (e.g., domain of use, boundaries of competence). After observing a bad decision from the AI component, end users or operators can ask questions or report complaints, failures, or near misses through a recourse channel.
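The mode-switcher mechanics described above (guard-conditioned activation, kill switch, human review in critical situations, and a fallback result when disabled) can be sketched as a small class. This is a minimal illustrative sketch, not an implementation from the catalogue; all names (`AIModeSwitcher`, `activate`, `kill_switch`, `decide`) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AIModeSwitcher:
    """Hypothetical sketch of the AI mode switcher pattern."""
    # Built-in guard: activation is allowed only within pre-defined conditions
    # (e.g., domain of use, boundaries of competence).
    guard: Callable[[dict], bool]
    enabled: bool = False

    def activate(self, context: dict) -> bool:
        # Activate the AI component only if the guard accepts the context.
        if self.guard(context):
            self.enabled = True
        return self.enabled

    def kill_switch(self) -> None:
        # Immediately shut down the AI component to stop its effects.
        self.enabled = False

    def decide(self, ai_decision: str, critical: bool = False,
               human_review: Optional[Callable[[str], str]] = None) -> str:
        if not self.enabled:
            # Component is off: fall back to non-AI behaviour.
            return "fallback"
        if critical and human_review is not None:
            # In critical situations, a human expert approves or overrides.
            return human_review(ai_decision)
        return ai_decision
```

A caller would pass a guard such as `lambda ctx: ctx.get("region") == "approved"` and, for critical decisions, a review callback that returns either the original decision or an override.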
1.2.2 Runtime Environment
Multi-Model Decision Maker
The multi-model decision maker employs different models to perform the same task or to enable a single decision (e.g., deploying different algorithms for visual perception). It improves reliability by deploying different models under different contexts (e.g., different geo-location regions) and enables fault tolerance by cross-validating ethical requirements for a single decision [24, 84]. Different consensus protocols can be defined to make the final decision, such as taking the majority decision. A stricter strategy is to accept only results on which all employed models agree. In addition, the end user or the operator can step in to review the outputs of the multiple models and make a final decision based on human expertise.
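The two consensus protocols mentioned above can be sketched in a few lines. This is an illustrative sketch under the assumption that each model's output is hashable; the function names are hypothetical, not from the catalogue. Returning `None` models the case that is escalated to the end user or operator.

```python
from collections import Counter

def majority_decision(outputs):
    """Majority protocol: accept the output that wins a strict majority,
    otherwise escalate to a human (None)."""
    winner, count = Counter(outputs).most_common(1)[0]
    return winner if count > len(outputs) / 2 else None

def unanimous_decision(outputs):
    """Stricter protocol: accept a result only if every model agrees."""
    return outputs[0] if len(set(outputs)) == 1 else None
```

For example, with model outputs `["stop", "stop", "go"]` the majority protocol accepts `"stop"`, while the unanimity protocol rejects the decision and hands it to the human reviewer.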
1.1.4 Model Architecture
Homogeneous Redundancy
N-version programming is a software design pattern for ensuring fault tolerance [61]. Similarly, deploying multiple redundant, identical AI components (e.g., two brake control components) can tolerate an individual AI component with high uncertainty that may make unethical decisions, or an individual adversarial hardware component that produces malicious data or behaves unethically [84]. A cross-check can be conducted on the outputs produced by the multiple components of a single type, and the results are accepted only when there is a consensus among the redundant components. Results that are not accepted automatically according to the consensus protocol can be further reviewed by the end user or the operator of the AI system.
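The cross-check step above can be sketched as a single function that runs the redundant components, accepts the output only on consensus, and otherwise escalates to a reviewer. This is a minimal sketch under illustrative assumptions; `cross_check` and `reviewer` are hypothetical names, and consensus here means unanimity among the redundant components.

```python
def cross_check(components, inputs, reviewer=None):
    """Run n redundant, identical components and cross-check their outputs.

    Accept the result automatically only when all components agree;
    otherwise escalate the conflicting outputs to a human reviewer
    (the end user or operator), or reject (None) if no reviewer is given.
    """
    outputs = [component(inputs) for component in components]
    if len(set(outputs)) == 1:
        return outputs[0]  # consensus: accept automatically
    return reviewer(outputs) if reviewer is not None else None
```

A disagreeing component (e.g., one faulty brake controller among three) thus cannot silently push its output through; its result is surfaced for human review instead.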
1.2.2 Runtime Environment
Governance Patterns
The governance for RAI systems can be defined as the structures and processes that are employed to ensure that the development and use of AI systems meet AI ethics principles. According to the structure of Shneiderman [104], governance can be built at three levels: industry level, organization level, and team level.
2.1 Oversight & Accountability
Governance Patterns > Industry-level governance patterns
3.1 Legal & Regulatory
Governance Patterns > Organization-level governance patterns
2.1 Oversight & Accountability
Governance Patterns > Team-level governance patterns
2.1.2 Roles & Accountability
Process Patterns
Process patterns are reusable methods and best practices that the development team can apply during the development process.
2.4.2 Design Standards
Process Patterns > Requirement Engineering
2.4 Engineering & Development
Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering
Lu, Qinghua; Zhu, Liming; Xu, Xiwei; Whittle, Jon; Zowghi, Didar; Jacquet, Aurelie (2024)
Responsible Artificial Intelligence (RAI) is widely considered one of the greatest scientific challenges of our time and is key to increasing the adoption of Artificial Intelligence (AI). Recently, a number of AI ethics principles frameworks have been published. However, without further guidance on best practices, practitioners are left with little beyond truisms. In addition, significant efforts have been placed at the algorithm level rather than the system level, mainly focusing on a subset of mathematics-amenable ethical principles, such as fairness. Nevertheless, ethical issues can arise at any step of the development lifecycle, cutting across many AI and non-AI components of systems beyond AI algorithms and models. To operationalize RAI from a system perspective, in this article, we present an RAI Pattern Catalogue based on the results of a multivocal literature review. Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle. The RAI Pattern Catalogue classifies the patterns into three groups: multi-level governance patterns, trustworthy process patterns, and RAI-by-design product patterns. These patterns provide systematic and actionable guidance for stakeholders to implement RAI. © 2024 Copyright held by the owner/author(s).
Plan and Design
Designing the AI system, defining requirements, and planning development
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks
Other