Reasoning
Supply chain security controls are secure development practices governing how systems are built and sourced.
Bill of Materials Registry
The bill of materials registry [12, 88] can be designed to keep a formal, machine-readable record of the supply chain details of the components used in building an AI system, including component name, version, supplier, dependency relationships, author, and timestamp. Beyond these supply chain details, context documents (such as model cards [81] for reporting AI models, and datasheets [41] for the datasets used to train them) can be integrated into the bill of materials registry.
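As a minimal sketch, a bill of materials entry could be modeled as a small record type plus an in-memory registry. All names here (`BomEntry`, `register`, the component and document names) are illustrative, not drawn from any specific BOM standard:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class BomEntry:
    """One component record in a hypothetical AI bill of materials registry."""
    name: str
    version: str
    supplier: str
    author: str
    dependencies: list = field(default_factory=list)   # components this one depends on
    context_docs: dict = field(default_factory=dict)   # e.g. links to a model card or datasheet
    timestamp: str = ""

registry = {}

def register(entry: BomEntry) -> None:
    """Record a component, stamping it with the registration time."""
    if not entry.timestamp:
        entry.timestamp = datetime.now(timezone.utc).isoformat()
    registry[(entry.name, entry.version)] = asdict(entry)

register(BomEntry(
    name="sentiment-model", version="2.1.0", supplier="ml-team", author="alice",
    dependencies=["reviews-dataset==3.0", "torch==2.2"],
    context_docs={"model_card": "docs/model_card.md", "datasheet": "docs/datasheet.md"},
))
```

In practice such a record would follow an established machine-readable format rather than an ad-hoc dataclass, but the fields mirror those the pattern calls for: name, version, supplier, dependencies, author, timestamp, and linked context documents.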
1.2.9 Other
Verifiable ethical credential
Verifiable ethical credentials can be used as evidence of ethical compliance for AI systems, components, models, developers, operators, users, organizations, and development processes [21, 72, 92]. Verifiable credentials are data that can be cryptographically verified and presented with strong proofs [23]. A publicly accessible data infrastructure needs to be built to support the generation and verification of ethical credentials on a neutral platform. Before using an AI system, users may verify its ethical credentials to check whether the system complies with AI ethics principles or regulations [21]. Conversely, users may themselves be required to present ethical credentials in order to use and operate an AI system (e.g., to ensure the flight safety of drones).
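A minimal sketch of the issue-and-verify flow. For brevity this uses an HMAC over the claims as the proof; real verifiable credentials (e.g., W3C-style VCs) use public-key signatures so that verification does not require the issuer's secret. All names and claims below are illustrative:

```python
import hmac, hashlib, json

def issue_credential(claims: dict, issuer_key: bytes) -> dict:
    """Issuer signs the claims; the signature is the cryptographic proof."""
    payload = json.dumps(claims, sort_keys=True).encode()
    proof = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def verify_credential(credential: dict, issuer_key: bytes) -> bool:
    """Recompute the proof over the claims and compare in constant time."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

key = b"issuer-secret"
cred = issue_credential(
    {"subject": "drone-42", "principle": "safety", "status": "compliant"}, key)
assert verify_credential(cred, key)
cred["claims"]["status"] = "non-compliant"   # tampering invalidates the proof
assert not verify_credential(cred, key)
```

The key property the pattern relies on is the last one: any modification to the claims after issuance makes verification fail.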
1.2.4 Security Infrastructure
Co-versioning registry
Co-versioning of the components or AI artifacts of AI systems provides end-to-end provenance guarantees across the entire lifecycle of AI systems. A co-versioning registry can track the co-evolution of components or AI artifacts [65, 70]. Co-versioning operates at different levels: co-versioning of AI components with non-AI components, and co-versioning of the artifacts within an AI component (i.e., co-versioning of data, model, code, and configurations, and co-versioning of local and global models in federated learning). Co-versioning enables effective maintenance and evolution of AI components because a deployed model or piece of code can be traced back to the exact set of artifacts, parameters, and metadata used to develop it.
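The tracing idea can be sketched as a registry that maps each release tag to the exact artifact versions that produced it. The function names and version strings are hypothetical:

```python
# Each release tag maps to the exact artifact versions that produced it,
# so a deployed model can be traced back to its data, code, and config.
co_versions: dict[str, dict[str, str]] = {}

def tag_release(tag: str, data: str, model: str, code: str, config: str) -> None:
    """Record the co-versioned set of artifacts behind one release."""
    co_versions[tag] = {"data": data, "model": model, "code": code, "config": config}

def trace(tag: str) -> dict[str, str]:
    """Recover the full set of co-versioned artifacts behind a deployment."""
    return co_versions[tag]

tag_release("v1.4.0", data="d-2024.03", model="m-17",
            code="git:abc123", config="cfg-9")
snapshot = trace("v1.4.0")["data"]   # the dataset snapshot behind release v1.4.0
```

A real registry would persist these mappings and hook into version control and experiment tracking, but the provenance guarantee is the same: one lookup from deployment tag to the co-evolved artifact set.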
2.4.3 Development Workflows
Federated learner
The federated learner trains an AI model across multiple edge devices or servers holding local data samples. The federated learner [15, 18, 68–70, 112, 113, 117] preserves data privacy by training models locally on the client devices and aggregating the local model updates into a global model on a central server (e.g., training the visual perception model locally in each vehicle). Decentralized learning is a variant of federated learning that can use blockchain to remove the single point of failure and coordinate the learning process in a fully decentralized way [120].
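The local-train-then-aggregate loop can be sketched as a toy federated-averaging round on a simple mean-estimation task. The `local_update` step below stands in for real local gradient training, and all names and data are illustrative:

```python
def local_update(global_weights, local_data, lr=0.1):
    """One client step: nudge the weights toward the mean of the local data
    (a stand-in for local gradient training; raw data stays on the device)."""
    target = sum(local_data) / len(local_data)
    return [w + lr * (target - w) for w in global_weights]

def federated_round(global_weights, clients):
    """Each client trains locally; the server averages only the model updates."""
    updates = [local_update(global_weights, data) for data in clients]
    return [sum(ws) / len(updates) for ws in zip(*updates)]

clients = [[1.0, 2.0], [3.0], [2.0, 2.0]]   # private per-device samples
weights = [0.0]                             # initial global model
for _ in range(50):
    weights = federated_round(weights, clients)
# weights drift toward the global mean without any client sharing its samples
```

Only the model updates cross the network; the per-device samples never leave the clients, which is the privacy property the pattern targets.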
1.1.4 Model Architecture
Governance Patterns
Governance for RAI systems can be defined as the structures and processes employed to ensure that the development and use of AI systems meet AI ethics principles. Following the structure of Shneiderman [104], governance can be built at three levels: the industry level, the organization level, and the team level.
2.1 Oversight & Accountability
Governance Patterns > Industry-level governance patterns
3.1 Legal & Regulatory
Governance Patterns > Organization-level governance patterns
2.1 Oversight & Accountability
Governance Patterns > Team-level governance patterns
2.1.2 Roles & Accountability
Process Patterns
Process patterns are reusable methods and best practices that the development team can apply throughout the development process.
2.4.2 Design Standards
Process Patterns > Requirement Engineering
2.4 Engineering & Development
Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering
Lu, Qinghua; Zhu, Liming; Xu, Xiwei; Whittle, Jon; Zowghi, Didar; Jacquet, Aurelie (2024)
Responsible Artificial Intelligence (RAI) is widely considered one of the greatest scientific challenges of our time and is key to increasing the adoption of Artificial Intelligence (AI). Recently, a number of AI ethics principles frameworks have been published. However, without further guidance on best practices, practitioners are left with little beyond truisms. In addition, significant effort has been placed at the algorithm level rather than the system level, mainly focusing on a subset of mathematics-amenable ethical principles, such as fairness. Nevertheless, ethical issues can arise at any step of the development lifecycle, cutting across many AI and non-AI components of systems beyond AI algorithms and models. To operationalize RAI from a system perspective, in this article we present an RAI Pattern Catalogue based on the results of a multivocal literature review. Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle. The RAI Pattern Catalogue classifies the patterns into three groups: multi-level governance patterns, trustworthy process patterns, and RAI-by-design product patterns. These patterns provide systematic and actionable guidance for stakeholders to implement RAI. © 2024 Copyright held by the owner/author(s).
Other (stage not listed)
Applies to a lifecycle stage not captured by the standard categories
Developer
Entity that creates, trains, or modifies the AI system
Govern
Policies, processes, and accountability structures for AI risk management