Staged rollout strategies, phased deployment, and tiered access approaches for production systems.
There are various deployment strategies for AI systems. Phased deployment initially deploys an AI system to a subset of users to reduce ethical risk [47]: the new version rolls out incrementally and serves alongside the old version. Phased deployment can also mean automating decisions in phases to better supervise and control automation; the pace usually depends on the stakes of the situation and on the level of confidence users have in the automated decisions. A/B testing deployment [58] is another common industry strategy, in which different versions of an AI model are deployed to production, then compared and selected based on their ethical performance. In addition, existing reliability practices, such as redundancy, also apply to the AI components of a system: multiple AI models work independently to improve the ethical performance of the AI components. Applying these deployment strategies helps reduce ethical risk because users can be quickly redirected to the older version or to another version of the system or model. However, adopting multiple deployment strategies during operations is complex and expensive.
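The phased-rollout idea above can be sketched in a few lines: a router sends a configurable fraction of traffic to the new model version while the old version keeps serving the rest, so users can be redirected back quickly if an ethical or operational regression appears. This is a minimal illustration; the model callables, fractions, and method names are assumptions, not an API from the catalogue.

```python
import random


class PhasedRouter:
    """Serve a new model version alongside the old one, exposing only a
    fraction of users to the new version at each rollout phase."""

    def __init__(self, old_model, new_model, new_fraction=0.05):
        self.old_model = old_model
        self.new_model = new_model
        self.new_fraction = new_fraction  # share of traffic on the new version

    def predict(self, request):
        # Randomly assign this request to a version based on the current phase.
        if random.random() < self.new_fraction:
            return ("new", self.new_model(request))
        return ("old", self.old_model(request))

    def expand(self, step=0.10):
        # Move to the next rollout phase: widen exposure of the new version.
        self.new_fraction = min(1.0, self.new_fraction + step)

    def rollback(self):
        # Regression detected: redirect all users back to the old version.
        self.new_fraction = 0.0
```

A/B testing deployment works the same way mechanically, except both versions keep serving fixed shares of traffic while their (ethical) performance metrics are compared before one is selected.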
AI systems may evolve frequently because of their data dependency. When ethical performance degrades over time, AI models need to be retrained with new data or features and reintegrated into the AI components; non-AI components may also need to be upgraded to meet new requirements or a changing context. New versions of AI systems therefore need to be deployed into production environments frequently and continuously. However, AI systems involve a higher degree of uncertainty and risk because of their autonomy, so there is a strong need for deployment strategies that support continuous deployment.
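The degradation-triggered retraining loop described above can be sketched as a simple monitor: keep a rolling window of a per-request quality score (for example, a fairness or accuracy metric) and flag retraining when the rolling mean falls below a baseline. The metric, threshold, and class name are illustrative assumptions rather than a prescribed design.

```python
from collections import deque


class EthicalPerformanceMonitor:
    """Flag when a model's rolling ethical-performance score has degraded
    enough that retraining and redeployment should be triggered."""

    def __init__(self, baseline, tolerance=0.05, window=100):
        self.baseline = baseline      # expected score at deployment time
        self.tolerance = tolerance    # acceptable drop before retraining
        self.scores = deque(maxlen=window)  # most recent per-request scores

    def record(self, score):
        self.scores.append(score)

    def needs_retraining(self):
        # Wait until the window is full so one-off outliers don't trigger it.
        if len(self.scores) < self.scores.maxlen:
            return False
        mean = sum(self.scores) / len(self.scores)
        return mean < self.baseline - self.tolerance
```

In a continuous-deployment pipeline, a positive `needs_retraining()` result would kick off retraining with fresh data and a new phased rollout of the retrained model.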
Reasoning
The phased deployment strategy rolls out new versions to production incrementally, comparing them with the existing version to reduce operational risk.
Operation
Governance Patterns
Governance for RAI systems can be defined as the structures and processes employed to ensure that the development and use of AI systems meet AI ethics principles. Following Shneiderman's structure [104], governance can be built at three levels: industry level, organization level, and team level.
Governance Patterns > Industry-level governance patterns
Governance Patterns > Organization-level governance patterns
Governance Patterns > Team-level governance patterns
Process Patterns
Process patterns are reusable methods and best practices that the development team can apply during the development process.
Process Patterns > Requirement Engineering
Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering
Lu, Qinghua; Zhu, Liming; Xu, Xiwei; Whittle, Jon; Zowghi, Didar; Jacquet, Aurelie (2024)
Responsible Artificial Intelligence (RAI) is widely considered as one of the greatest scientific challenges of our time and is key to increase the adoption of Artificial Intelligence (AI). Recently, a number of AI ethics principles frameworks have been published. However, without further guidance on best practices, practitioners are left with nothing much beyond truisms. In addition, significant efforts have been placed at algorithm level rather than system level, mainly focusing on a subset of mathematics-amenable ethical principles, such as fairness. Nevertheless, ethical issues can arise at any step of the development lifecycle, cutting across many AI and non-AI components of systems beyond AI algorithms and models. To operationalize RAI from a system perspective, in this article, we present an RAI Pattern Catalogue based on the results of a multivocal literature review. Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle. The RAI Pattern Catalogue classifies the patterns into three groups: multi-level governance patterns, trustworthy process patterns, and RAI-by-design product patterns. These patterns provide systematic and actionable guidance for stakeholders to implement RAI. © 2024 Copyright held by the owner/author(s).
Deploy: Releasing the AI system into a production environment.
Deployer: Entity that integrates and deploys the AI system for end users.
Manage: Prioritising, responding to, and mitigating AI risks.