Continuous ethical validator
AI systems often require continual learning based on new data collected during operation. The continuous ethical validator deployed in an AI system continuously monitors and validates the outcomes of AI components (e.g., the path recommended by the navigation system) against the ethical requirements [58, 111]. An outcome here means whether the AI system provides the intended benefits and behaves appropriately in the given situation. The time and frequency of validation can be configured. Version-based feedback and rebuild alerts are sent when the pre-defined conditions regarding the ethical requirements are met.
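The validation loop described above can be sketched in a few lines. This is a minimal illustration, not the catalogue's implementation: the class name, the predicate-based requirement encoding, and the alert fields are all assumptions.

```python
import time

class ContinuousEthicalValidator:
    """Validates AI component outcomes against ethical requirements.

    `requirements` maps a requirement name to a predicate over an outcome
    dict; the scheduling loop that calls validate() at the configured
    interval is omitted. Names and structure are illustrative.
    """

    def __init__(self, requirements, interval_s=60.0):
        self.requirements = requirements  # name -> predicate(outcome) -> bool
        self.interval_s = interval_s      # configurable validation frequency
        self.alerts = []                  # rebuild alerts raised so far

    def validate(self, outcome):
        """Check one outcome; record a rebuild alert per violated requirement."""
        violations = [name for name, pred in self.requirements.items()
                      if not pred(outcome)]
        for name in violations:
            self.alerts.append({
                "requirement": name,
                "outcome": outcome,
                "timestamp": time.time(),
            })
        return violations

# Example: a navigation component's recommended path must avoid school zones.
validator = ContinuousEthicalValidator(
    requirements={"avoid_school_zones": lambda o: not o.get("crosses_school_zone")},
    interval_s=30.0,
)
ok = validator.validate({"path": "A->B", "crosses_school_zone": False})
bad = validator.validate({"path": "A->C", "crosses_school_zone": True})
```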
1.2.3 Monitoring & Detection
Ethical sandbox
An ethical sandbox can be applied to isolate an AI component from other AI components and non-AI components by running it separately in a safe environment [63] (e.g., sandboxing an unverified visual perception component). The AI component can thus execute without affecting other components or the output of the AI system. The ethical sandbox is an emulated environment with no access to the rest of the AI system; the emulation duplicates all the hardware and software functionality of the AI system. Developers can therefore run an AI component safely to determine how it works and whether it behaves responsibly before deploying it widely. A maximal tolerable probability of violating the ethical requirements should be defined as the ethical margin for the sandbox. A watchdog can be used to limit the execution time of the AI component to reduce the ethical risk (e.g., only activating the visual perception component for 5 minutes on bridges built especially for autonomous vehicles).
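A process-level sketch of the sandbox-plus-watchdog idea follows. Running the component in a separate interpreter process with a timeout is one simple way to get isolation and a time limit; a real ethical sandbox would also restrict OS resources and emulate the system's hardware interfaces, which this sketch does not attempt.

```python
import subprocess
import sys

def run_in_sandbox(component_code, timeout_s=5.0):
    """Run untrusted AI-component code in a separate interpreter process.

    The child process has no access to this process's memory, and the
    watchdog timeout limits its execution time. Returns (stdout, timed_out).
    Illustrative sketch only, not a hardened sandbox.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-c", component_code],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return proc.stdout.strip(), False
    except subprocess.TimeoutExpired:
        # Watchdog fired: the component exceeded its allotted time.
        return "", True

# A well-behaved component finishes in time; a runaway one is cut off.
out, timed_out = run_in_sandbox("print('lane detected')", timeout_s=5.0)
_, runaway = run_in_sandbox("while True: pass", timeout_s=1.0)
```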
1.2.2 Runtime Environment
Ethical Knowledge Base
An ethical knowledge base, such as a knowledge graph, captures meaningful entities and concepts, and their relationships, across the design, implementation, deployment, and operation of AI systems [32, 85, 101]. With the ethical knowledge base, the rich semantic relationships between entities are made explicit and traceable, both across heterogeneous high-level documents and across the different artifacts of the AI system lifecycle. Thus, the ethical requirements of the AI system can be systematically accessed and analyzed using the ethical knowledge base.
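The traceability idea can be illustrated with a toy triple store. The relation names (`implemented_by`, `tested_by`) and entity names are assumptions made up for this sketch; a real knowledge base would use an RDF store or graph database.

```python
class EthicalKnowledgeBase:
    """A toy knowledge graph of (subject, relation, object) triples.

    Entity and relation names are illustrative assumptions, not a schema
    from the catalogue.
    """

    def __init__(self):
        self.triples = set()

    def add(self, subj, rel, obj):
        self.triples.add((subj, rel, obj))

    def objects(self, subj, rel):
        """All objects linked from `subj` via `rel`, in sorted order."""
        return sorted(o for s, r, o in self.triples if s == subj and r == rel)

    def trace(self, requirement):
        """Trace a requirement to implementing components and their tests."""
        artifacts = []
        for comp in self.objects(requirement, "implemented_by"):
            artifacts.append(comp)
            artifacts.extend(self.objects(comp, "tested_by"))
        return artifacts

kb = EthicalKnowledgeBase()
kb.add("REQ-privacy", "implemented_by", "anonymizer_component")
kb.add("anonymizer_component", "tested_by", "test_anonymizer.py")
trace = kb.trace("REQ-privacy")
```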
3.2.3 Research Resources
Ethical digital twin
Before running an AI system in the real world, it is important to perform system-level simulation through an ethical digital twin running on a simulation infrastructure, to understand the behaviors of the AI system and assess ethical risks in a cost-effective way. The digital twin [102] was introduced by NASA as a digital representation of a real system used in lab-testing activities. The digital twin of an AI system can be used to represent the behaviors of the AI system and forecast change impacts. The ethical digital twin can also be used during operation of the AI system to assess the system's runtime behaviors and decisions, based on the simulation model fed with real-time data. The assessment results can be sent back to alert the system or user before the unethical behavior or decision takes effect.
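The runtime use of the twin, checking a decision against the simulation model before it takes effect, might look like the following. The risk model, threshold, and telemetry fields are invented for illustration.

```python
def assess_with_digital_twin(simulate, decision, telemetry, risk_threshold=0.1):
    """Assess a runtime decision against a simulation model before it acts.

    `simulate` is the twin's model: it maps (decision, telemetry) to a
    predicted ethical-risk score in [0, 1]. Returns (approved, risk).
    All names and the threshold are illustrative assumptions.
    """
    risk = simulate(decision, telemetry)
    return risk <= risk_threshold, risk

def toy_twin(decision, telemetry):
    """Toy model: predicted risk grows with speed in a crowded area."""
    crowd = telemetry["pedestrian_density"]  # fraction of area occupied
    return min(1.0, decision["speed_kmh"] * crowd / 100.0)

# The same speed is approved on an empty road but blocked in a crowd,
# alerting the system before the risky decision takes effect.
approved, risk = assess_with_digital_twin(
    toy_twin, {"speed_kmh": 50}, {"pedestrian_density": 0.1})
blocked, high_risk = assess_with_digital_twin(
    toy_twin, {"speed_kmh": 50}, {"pedestrian_density": 0.9})
```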
2.2.2 Testing & Evaluation
Incentive registry
Incentive mechanisms are effective treatments for motivating AI systems and encouraging the stakeholders involved in the AI system ecosystem to execute tasks in a responsible manner. An incentive registry records the rewards that correspond to the AI system's ethical behavior and the outcomes of its decisions [121, 125] (e.g., rewards for path planning without ethical risks). There are various ways to formulate the incentive mechanism, such as using reinforcement learning or building the incentive mechanism on a publicly accessible data infrastructure like blockchain [125]. Traditional incentive mechanisms for human participants include reputation-based and payment-based schemes.
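An append-only in-memory list can stand in for the registry to show the recording idea; in practice the backing store could be a blockchain, as the text notes. Field names and the reward values are illustrative.

```python
import time

class IncentiveRegistry:
    """Append-only record of rewards for ethical behavior.

    A plain list stands in for a tamper-evident store (e.g., a blockchain).
    Field names are illustrative assumptions.
    """

    def __init__(self):
        self._entries = []

    def reward(self, actor, behaviour, amount):
        """Record a reward for an actor's ethical behavior or decision outcome."""
        entry = {"actor": actor, "behaviour": behaviour,
                 "amount": amount, "timestamp": time.time()}
        self._entries.append(entry)
        return entry

    def balance(self, actor):
        """Total rewards accumulated by one actor."""
        return sum(e["amount"] for e in self._entries if e["actor"] == actor)

registry = IncentiveRegistry()
registry.reward("planner_v2", "path planned with no ethical-risk flags", 10)
registry.reward("planner_v2", "yielded to pedestrian", 5)
total = registry.balance("planner_v2")
```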
1.2.9 Other
Ethical black box
The purpose of embedding an ethical black box in an AI system is to investigate why and how an AI system caused an accident or a near miss. The ethical black box continuously records sensor data, internal status data, decisions, behaviors (both system and operator), and effects [33, 34, 122]. For example, an ethical black box could be built into an automated driving system to record the behaviors of the system and driver and their effects [34]. All of these data need to be kept as evidence with the timestamp and location data.
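A minimal sketch of the recorder, keeping each record as a timestamped, location-stamped JSON line, is shown below. The record format and field names are assumptions for illustration; a real black box would write to tamper-evident persistent storage.

```python
import json
import time

class EthicalBlackBox:
    """Continuously records sensor data, decisions, and effects as evidence.

    Each record is a timestamped, location-stamped JSON line. The format is
    an illustrative assumption, not the catalogue's specification.
    """

    def __init__(self):
        self._log = []  # in practice: tamper-evident persistent storage

    def record(self, kind, payload, location):
        """Append one evidence record with timestamp and location."""
        line = json.dumps({
            "timestamp": time.time(),
            "location": location,     # (longitude, latitude)
            "kind": kind,             # e.g. "sensor", "decision", "effect"
            "payload": payload,
        }, sort_keys=True)
        self._log.append(line)

    def evidence(self, kind):
        """Retrieve all records of one kind for post-incident investigation."""
        return [r for r in (json.loads(l) for l in self._log)
                if r["kind"] == kind]

box = EthicalBlackBox()
box.record("sensor", {"lidar_range_m": 42.0}, location=(151.2, -33.9))
box.record("decision", {"action": "emergency_brake"}, location=(151.2, -33.9))
decisions = box.evidence("decision")
```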
1.2.3 Monitoring & Detection
Global view auditor
The global-view auditor is a component that collects information from multiple AI components/AI systems and processes it to identify discrepancies among the information collected [79]. Based on the result, the global-view auditor may alert an AI system/component to a wrong perception, thus avoiding negative impacts or identifying liability when negative events occur. This pattern can also be used to improve the decision making of an AI system by drawing on the knowledge of other systems. For example, an autonomous vehicle may increase its visibility using the perceptions of others to make better decisions at runtime. The global-view auditor enables accountability that covers the different perceptions of the AI components/systems involved and redresses conflicting information collected from multiple AI components/systems.
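One simple way to detect and redress such discrepancies is majority voting over the reported perceptions; that resolution strategy, and all the identifiers, are illustrative choices for this sketch rather than the catalogue's prescription.

```python
from collections import Counter

def audit_perceptions(perceptions):
    """Cross-check one observation reported by several AI systems.

    `perceptions` maps a system id to its reported label. Returns the
    majority label and the sorted ids of dissenting systems to alert.
    Majority voting is one illustrative discrepancy-resolution choice.
    """
    counts = Counter(perceptions.values())
    consensus, _ = counts.most_common(1)[0]
    dissenters = sorted(s for s, label in perceptions.items()
                        if label != consensus)
    return consensus, dissenters

# Two vehicles and a roadside camera report the same scene; the auditor
# flags the camera so it can be alerted to a likely wrong perception.
consensus, dissenters = audit_perceptions({
    "vehicle_a": "pedestrian",
    "vehicle_b": "pedestrian",
    "roadside_cam": "shadow",
})
```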
1.2.3 Monitoring & Detection
Governance Patterns
Governance for RAI systems can be defined as the structures and processes employed to ensure that the development and use of AI systems meet AI ethics principles. Following the structure of Shneiderman [104], governance can be built at three levels: the industry level, the organization level, and the team level.
2.1 Oversight & Accountability
Governance Patterns > Industry-level governance patterns
3.1 Legal & Regulatory
Governance Patterns > Organization-level governance patterns
2.1 Oversight & Accountability
Governance Patterns > Team-level governance patterns
2.1.2 Roles & Accountability
Process Patterns
Process patterns are reusable methods and best practices that development teams can apply throughout the development process.
2.4.2 Design Standards
Process Patterns > Requirement Engineering
2.4 Engineering & Development
Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering
Lu, Qinghua; Zhu, Liming; Xu, Xiwei; Whittle, Jon; Zowghi, Didar; Jacquet, Aurelie (2024)
Responsible Artificial Intelligence (RAI) is widely considered one of the greatest scientific challenges of our time and is key to increasing the adoption of Artificial Intelligence (AI). Recently, a number of AI ethics principles frameworks have been published. However, without further guidance on best practices, practitioners are left with little beyond truisms. In addition, significant effort has been placed at the algorithm level rather than the system level, mainly focusing on a subset of mathematics-amenable ethical principles, such as fairness. Nevertheless, ethical issues can arise at any step of the development lifecycle, cutting across many AI and non-AI components of systems beyond AI algorithms and models. To operationalize RAI from a system perspective, in this article we present an RAI Pattern Catalogue based on the results of a multivocal literature review. Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle. The RAI Pattern Catalogue classifies the patterns into three groups: multi-level governance patterns, trustworthy process patterns, and RAI-by-design product patterns. These patterns provide systematic and actionable guidance for stakeholders to implement RAI. © 2024 Copyright held by the owner/author(s).
Operate and Monitor: Running, maintaining, and monitoring the AI system post-deployment.
Deployer: Entity that integrates and deploys the AI system for end users.
Measure: Quantifying, testing, and monitoring identified AI risks.