Containment, isolation, and control mechanisms for system execution.
An ethical sandbox can be applied to isolate an AI component from other AI components and non-AI components by running it separately in a safe environment [63] (e.g., sandboxing an unverified visual perception component). The AI component can thus execute without affecting other components or the output of the AI system. The ethical sandbox is an emulated environment with no access to the rest of the AI system; the emulation environment duplicates all the hardware and software functionality of the AI system. Developers can therefore run an AI component safely to determine how it works and whether it behaves responsibly before deploying it widely. The maximal tolerable probability of violating the ethical requirements should be defined as the ethical margin for the sandbox. A watchdog can be used to limit the execution time of the AI component to reduce the ethical risk (e.g., activating the visual perception component for only 5 minutes on bridges built especially for autonomous vehicles).
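The sandbox-plus-watchdog mechanism can be sketched in a few lines of Python. This is a minimal illustration only: process-level isolation is a much weaker stand-in for the full emulation environment the pattern describes, and `visual_perception`, `run_in_sandbox`, and the time limits are all hypothetical names chosen for this sketch. Running the component in a child process contains its state and failures, and a join timeout acts as the watchdog.

```python
import multiprocessing as mp
import queue as queue_mod
import time

def visual_perception(frame):
    # Hypothetical unverified component under evaluation (illustrative only).
    return {"objects": ["vehicle", "pedestrian"], "frame": frame}

def _isolated_entry(component, inputs, out_queue):
    # Runs in a child process: the component cannot modify the parent's
    # state, and any crash is contained to this process.
    try:
        out_queue.put(("ok", component(inputs)))
    except Exception as exc:
        out_queue.put(("error", repr(exc)))

def run_in_sandbox(component, inputs, time_limit_s=5.0):
    """Execute `component` in a separate process with a watchdog timeout."""
    out_queue = mp.Queue()
    proc = mp.Process(target=_isolated_entry, args=(component, inputs, out_queue))
    proc.start()
    proc.join(time_limit_s)      # watchdog: bound the execution time
    if proc.is_alive():          # time budget exceeded -> stop the component
        proc.terminate()
        proc.join()
        return ("timeout", None)
    try:
        return out_queue.get(timeout=1.0)
    except queue_mod.Empty:
        return ("no-result", None)

if __name__ == "__main__":
    print(run_in_sandbox(visual_perception, "frame-001", time_limit_s=2.0))
```

A production sandbox would add stronger boundaries (no network or filesystem access, resource limits, full hardware/software emulation); the timeout here corresponds to the watchdog that enforces the ethical time budget.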
Fastcase AI Sandbox provides a secure platform for users to upload datasets and perform data analysis in a safe environment. AI Sandbox provides an AI execution environment and a RESTful interface that can be used from modern programming languages.
Reasoning
Isolates the AI component in a sandboxed emulation environment, preventing access to other systems during testing.
Governance Patterns
The governance of RAI systems can be defined as the structures and processes employed to ensure that the development and use of AI systems meet AI ethics principles. Following the structure of Shneiderman [104], governance can be built at three levels: the industry level, the organization level, and the team level.
2.1 Oversight & Accountability
Governance Patterns > Industry-level governance patterns
3.1 Legal & Regulatory
Governance Patterns > Organization-level governance patterns
2.1 Oversight & Accountability
Governance Patterns > Team-level governance patterns
2.1.2 Roles & Accountability
Process Patterns
The process patterns are reusable methods and best practices that can be used by the development team during the development process.
2.4.2 Design Standards
Process Patterns > Requirement Engineering
2.4 Engineering & Development
Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering
Lu, Qinghua; Zhu, Liming; Xu, Xiwei; Whittle, Jon; Zowghi, Didar; Jacquet, Aurelie (2024)
Responsible Artificial Intelligence (RAI) is widely considered one of the greatest scientific challenges of our time and is key to increasing the adoption of Artificial Intelligence (AI). Recently, a number of AI ethics principles frameworks have been published. However, without further guidance on best practices, practitioners are left with little beyond truisms. In addition, significant effort has been placed at the algorithm level rather than the system level, mainly focusing on a subset of mathematics-amenable ethical principles, such as fairness. Nevertheless, ethical issues can arise at any step of the development lifecycle, cutting across many AI and non-AI components of systems beyond AI algorithms and models. To operationalize RAI from a system perspective, in this article, we present an RAI Pattern Catalogue based on the results of a multivocal literature review. Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle. The RAI Pattern Catalogue classifies the patterns into three groups: multi-level governance patterns, trustworthy process patterns, and RAI-by-design product patterns. These patterns provide systematic and actionable guidance for stakeholders to implement RAI. © 2024 Copyright held by the owner/author(s).
Verify and Validate
Testing, evaluating, auditing, and red-teaming the AI system
Developer
Entity that creates, trains, or modifies the AI system
Measure
Quantifying, testing, and monitoring identified AI risks
Other