Verifiable ethical credentials can be used as evidence of ethical compliance for AI systems, components, models, developers, operators, users, organizations, and development processes [21, 72, 92]. Verifiable credentials are data that can be cryptographically verified and presented with strong proofs [23]. Publicly accessible data infrastructure needs to be built on a neutral platform to support the generation and verification of ethical credentials. Before using an AI system, users can verify the system's ethical credentials to check whether it complies with AI ethics principles or regulations [21]. Conversely, users may themselves be required to present ethical credentials in order to use or operate an AI system (e.g., to ensure the flight safety of drones).
A verifiable ethical credential helps increase user trust in an AI system by transferring the trust users place in the issuing authority to the AI systems, the organizations that develop them, and the operators who run them. Such a transitive trust relationship is critical to the efficient functioning of the AI system. With an ethical credential, an AI system can offer proof of compliance as an incentive for users to adopt it, thus increasing AI adoption. However, an ethical credential may be forged, which makes verifying the authenticity of credentials challenging. Blockchain can be adopted to build the credential infrastructure and ensure data integrity. For example, SecureKey is a blockchain-based infrastructure for identity management with support for verifiable credentials.
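The issue-and-verify flow above can be sketched in a few lines. This is a minimal illustration only: the issuer name, claim fields, and the symmetric HMAC "proof" are all assumptions made for the example. Real verifiable credentials use asymmetric digital signatures (e.g., Ed25519 under the W3C Verifiable Credentials data model), so that verifiers do not need the issuer's secret key.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key for this sketch. In practice the issuing authority
# would hold a private signing key, and verifiers would use the public key.
ISSUER_KEY = b"demo-issuer-secret"

def issue_credential(subject: str, claim: str) -> dict:
    """Issue an ethical credential whose proof binds the claim to the subject."""
    payload = {"subject": subject, "claim": claim, "issuer": "DemoAuthority"}
    body = json.dumps(payload, sort_keys=True).encode()
    proof = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "proof": proof}

def verify_credential(credential: dict) -> bool:
    """Recompute the proof over the payload; any tampering makes it mismatch."""
    body = json.dumps(credential["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

cred = issue_credential("drone-42", "complies with flight-safety principle")
assert verify_credential(cred)             # authentic credential verifies

cred["payload"]["claim"] = "forged claim"  # a forged claim breaks the proof
assert not verify_credential(cred)
```

The tamper check at the end illustrates why forgery is detectable: the proof is bound to the exact payload, so altering any claim invalidates it. An anchored blockchain record, as in the SecureKey example, additionally prevents the issuer's published credentials from being silently replaced.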
Reasoning
Establishes credentialing standards enabling verified proof of ethical compliance through shared infrastructure.
Supply chain patterns
Governance Patterns
Governance for RAI systems can be defined as the structures and processes employed to ensure that the development and use of AI systems meet AI ethics principles. Following the structure proposed by Shneiderman [104], governance can be established at three levels: the industry level, the organization level, and the team level.
Governance Patterns > Industry-level governance patterns
Governance Patterns > Organization-level governance patterns
Governance Patterns > Team-level governance patterns
Process Patterns
Process patterns are reusable methods and best practices that the development team can apply during the development process.
Process Patterns > Requirements Engineering
Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering
Lu, Qinghua; Zhu, Liming; Xu, Xiwei; Whittle, Jon; Zowghi, Didar; Jacquet, Aurelie (2024)
Responsible Artificial Intelligence (RAI) is widely considered one of the greatest scientific challenges of our time and is key to increasing the adoption of Artificial Intelligence (AI). Recently, a number of AI ethics principles frameworks have been published. However, without further guidance on best practices, practitioners are left with little beyond truisms. In addition, significant effort has been placed at the algorithm level rather than the system level, mainly focusing on a subset of mathematics-amenable ethical principles, such as fairness. Nevertheless, ethical issues can arise at any step of the development lifecycle, cutting across many AI and non-AI components of systems beyond AI algorithms and models. To operationalize RAI from a system perspective, in this article we present an RAI Pattern Catalogue based on the results of a multivocal literature review. Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle. The RAI Pattern Catalogue classifies the patterns into three groups: multi-level governance patterns, trustworthy process patterns, and RAI-by-design product patterns. These patterns provide systematic and actionable guidance for stakeholders to implement RAI. © 2024 Copyright held by the owner/author(s).
Other (outside lifecycle)
Outside the standard AI system lifecycle
Governance Actor
Regulator, standards body, or oversight entity shaping AI policy
Govern
Policies, processes, and accountability structures for AI risk management