Definition of roles, teams, and responsibility assignments for AI governance.
Team-level stakeholders: Development teams are responsible for developing and deploying AI systems, and include product managers, project managers, team leaders, business analysts, architects, UX/UI designers, data scientists, developers, testers, and operators. Development teams are expected to implement RAI in their development process and embed RAI into the product design of AI systems.
Reasoning
Defines development team roles and responsibility assignments for implementing RAI across product design and deployment processes.
Customized agile process
To address ethical issues in the AI system development process, agile methods need to be extended and customized to allow consideration of ethics principles. Extension points include artifacts, roles, ceremonies, practices, and culture.
2.4.3 Development Workflows: Tight Coupling of AI and Non-AI Development
To bridge the methodological gap between AI and non-AI development, the AI team and the non-AI team need to be clear about exactly what the project is delivering, work in shared sprints, and use a common co-versioning registry to track progress [71]. Tight coupling of AI and non-AI development improves trust within the project team and communication about both system-level and model-level ethical requirements.
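The shape of such a co-versioning registry entry can be sketched as below. This is a minimal illustration, not an implementation from the catalogue: the class, field names, and sample values are all hypothetical, assuming a shared sprint record that pins the AI artifacts (model, dataset) to the non-AI code release they were validated with.

```python
# Hypothetical co-versioning registry: one entry per shared sprint,
# pinning model + data versions to the non-AI code release.
from dataclasses import dataclass, field


@dataclass
class CoVersionEntry:
    sprint: str          # sprint shared by the AI and non-AI teams
    code_version: str    # non-AI application release
    model_version: str   # AI model version delivered in this sprint
    data_version: str    # dataset snapshot the model was trained on
    ethical_requirements: list[str] = field(default_factory=list)


registry: list[CoVersionEntry] = []
registry.append(CoVersionEntry(
    sprint="2024-S3",
    code_version="app-v2.4.1",
    model_version="ranker-v7",
    data_version="clicks-2024-06",
    ethical_requirements=["fairness review passed",
                          "explainability doc updated"],
))

# Both teams query the same registry to see exactly what ships together.
latest = registry[-1]
print(latest.sprint, latest.code_version, latest.model_version)
```

Keeping model, data, and code versions in one record is what lets either team answer "which model shipped with which release, under which ethical requirements" without cross-referencing two trackers.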
2.4.3 Development Workflows: Diverse Team
Building a diverse project team can effectively reduce bias and improve diversity and inclusion in AI systems [31, 72, 131]. Diversity can span gender, race, age, sexual orientation, expertise, and so on. A diverse team can drive creative thinking for greater innovation, but communication can become challenging due to differing backgrounds and preferences.
2.4.4 Training & Awareness: Stakeholder Engagement
Stakeholder engagement allows AI systems to better reflect their stakeholders' needs and expectations [17, 104, 129, 131]. Stakeholders can be engaged in various ways, such as interviews, online and offline meetings, project planning/review, participatory design workshops, and crowdsourcing. Stakeholders may help the project team identify potential ethical risks before they become threats, but different stakeholders may hold conflicting opinions.
2.2.1 Risk Assessment: Continuous Documentation Using Templates
Project teams need to create and continuously update documentation for the key artifacts of AI systems that may lead to ethical issues, such as data and models. Continuous documentation using templates helps track the evolution of artifacts and clarify the context in which AI systems are trustworthy.
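A minimal template for such continuous documentation could look like the following, loosely in the spirit of model cards. The class, fields, and entries are illustrative assumptions, not a format prescribed by the catalogue.

```python
# Hypothetical documentation template for a model artifact. Each sprint
# appends a new entry instead of overwriting the old one, so the
# evolution of the artifact stays traceable.
from dataclasses import dataclass


@dataclass
class ModelDocEntry:
    version: str
    intended_use: str        # context in which the model is trustworthy
    known_limitations: str   # conditions under which it should not be used
    ethical_review: str      # latest review outcome


history = [
    ModelDocEntry("v1", "internal triage only",
                  "untested on non-English text", "pending"),
    ModelDocEntry("v2", "internal triage only",
                  "non-English support added", "approved 2024-05"),
]

current = history[-1]
print(f"{current.version}: {current.intended_use} ({current.ethical_review})")
```

Because the template's fields are fixed, successive entries can be diffed mechanically, which is what makes the documentation "continuous" rather than a one-off report.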
2.2.4 Assurance Documentation: Failure Mode and Effects Analysis
Failure Mode and Effects Analysis (FMEA) is a bottom-up risk assessment method that can be used to identify ethical risks and calculate their priorities at the beginning of the development process.
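In standard FMEA, priority is computed as a Risk Priority Number (RPN): the product of severity, occurrence, and detection ratings, each scored from 1 (low) to 10 (high, where a high detection score means the failure is hard to detect). A small sketch, with the ethical risks and ratings being purely illustrative:

```python
# FMEA sketch: rank hypothetical ethical risks by Risk Priority Number.
# RPN = Severity x Occurrence x Detection, each rated 1..10.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number: higher means higher priority."""
    return severity * occurrence * detection


# (failure mode, severity, occurrence, detection) -- illustrative values.
risks = [
    ("Biased training data skews loan approvals", 9, 6, 5),
    ("Model drift degrades accuracy for minority groups", 7, 5, 7),
    ("Opaque decisions block user recourse", 6, 8, 3),
]

# Sort failure modes by descending RPN so the team addresses the worst first.
ranked = sorted(risks, key=lambda r: rpn(*r[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"RPN={rpn(s, o, d):3d}  {name}")
```

Running the analysis early, as the pattern suggests, means these ratings are estimates; the value is in forcing the team to enumerate failure modes and agree on a ranking before design decisions harden.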
2.2.1 Risk Assessment: Fault Tree Analysis
Fault Tree Analysis (FTA) [30] can be used to describe how system-level ethical failures arise from smaller ethical failure events, using an analytical graph (i.e., a fault tree). The development team can easily see how ethical failures propagate through the AI system. FTA can be performed during the design or operation stage to anticipate potential ethical risks and recommend mitigation actions.
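The propagation can be made quantitative with standard fault-tree gate arithmetic: an AND gate multiplies the probabilities of its inputs, and an OR gate combines them as the complement of all inputs failing to occur (assuming independent basic events). The events and probabilities below are hypothetical:

```python
# Fault-tree sketch: propagate basic ethical failure-event probabilities
# up through AND/OR gates to a system-level top event.

def and_gate(*probs: float) -> float:
    """Gate output occurs only if ALL input events occur."""
    p = 1.0
    for x in probs:
        p *= x
    return p


def or_gate(*probs: float) -> float:
    """Gate output occurs if ANY input event occurs."""
    p = 1.0
    for x in probs:
        p *= (1.0 - x)
    return 1.0 - p


# Basic events (illustrative per-deployment probabilities):
p_biased_data = 0.05      # training data under-represents a group
p_no_fairness_test = 0.20 # fairness testing skipped in the pipeline
p_drift = 0.10            # unmonitored post-deployment model drift

# An unfair outcome ships only if data is biased AND testing missed it.
p_unfair_release = and_gate(p_biased_data, p_no_fairness_test)

# Top event: a system-level ethical failure via either path.
p_top = or_gate(p_unfair_release, p_drift)
print(f"P(top event) = {p_top:.4f}")
```

Even with rough probability estimates, walking the tree this way shows which basic events dominate the top event and therefore where mitigation effort pays off most.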
2.2.1 Risk Assessment: Verifiable Claim for AI System Artifacts
Potential users of AI systems need methods for assessing an AI system's ethical properties and comparing the system to other systems. A verifiable claim is a statement about an AI system or an artifact (e.g., a model or dataset) that is substantiated by a verification mechanism. A verifiable claim platform can be built to support developers in making claims about ethical properties [40] and conducting the verification [124]. Such a platform must account for the disparity of stakeholders' views; for example, developers might focus on reliability, whereas users might be interested in fairness. The platform itself provides management capabilities such as claim creation and verification, access control, and dispute management.
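The core record such a platform manages can be sketched as a claim paired with its verification mechanism. Everything here is an assumption for illustration: the class name, fields, and the stub verification callables stand in for whatever checks (monitoring queries, fairness audits) a real platform would run.

```python
# Hypothetical verifiable-claim record: a statement about an artifact
# plus the mechanism that substantiates (or disputes) it.
from dataclasses import dataclass
from typing import Callable


@dataclass
class EthicalClaim:
    artifact: str                  # e.g. a model or dataset identifier
    ethical_property: str          # property the claim is about
    statement: str                 # human-readable claim text
    verify_fn: Callable[[], bool]  # verification mechanism

    def verify(self) -> bool:
        return self.verify_fn()


# Developers and users care about different properties of the same
# artifact, so the platform stores claims per property.
claims = [
    EthicalClaim("credit-model-v3", "reliability",
                 "p99 latency under 200 ms over 30 days",
                 lambda: True),   # stub: would query monitoring data
    EthicalClaim("credit-model-v3", "fairness",
                 "demographic parity difference below 0.05",
                 lambda: False),  # stub: would rerun a fairness audit
]

for c in claims:
    status = "verified" if c.verify() else "disputed"
    print(f"[{status}] {c.ethical_property}: {c.statement}")
```

Binding each claim to an executable verification mechanism is what separates a verifiable claim from a marketing statement: the "disputed" fairness claim above would feed straight into the platform's dispute-management workflow.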
3.2.2 Technical Standards: Governance Patterns
Governance for RAI systems can be defined as the structures and processes employed to ensure that the development and use of AI systems meet AI ethics principles. Following the structure of Shneiderman [104], governance can be built at three levels: the industry level, the organization level, and the team level.
2.1 Oversight & Accountability: Governance Patterns > Industry-level governance patterns
3.1 Legal & Regulatory: Governance Patterns > Organization-level governance patterns
2.1 Oversight & Accountability: Process Patterns
Process patterns are reusable methods and best practices that the development team can apply throughout the development process.
2.4.2 Design Standards: Process Patterns > Requirement Engineering
2.4 Engineering & Development: Process Patterns > Design
2.4 Engineering & Development

Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering
Lu, Qinghua; Zhu, Liming; Xu, Xiwei; Whittle, Jon; Zowghi, Didar; Jacquet, Aurelie (2024)
Responsible Artificial Intelligence (RAI) is widely considered one of the greatest scientific challenges of our time and is key to increasing the adoption of Artificial Intelligence (AI). Recently, a number of AI ethics principles frameworks have been published. However, without further guidance on best practices, practitioners are left with little beyond truisms. In addition, significant effort has been placed at the algorithm level rather than the system level, mainly focusing on a subset of mathematics-amenable ethical principles, such as fairness. Nevertheless, ethical issues can arise at any step of the development lifecycle, cutting across many AI and non-AI components of systems beyond AI algorithms and models. To operationalize RAI from a system perspective, in this article, we present an RAI Pattern Catalogue based on the results of a multivocal literature review. Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle. The RAI Pattern Catalogue classifies the patterns into three groups: multi-level governance patterns, trustworthy process patterns, and RAI-by-design product patterns. These patterns provide systematic and actionable guidance for stakeholders to implement RAI. © 2024 Copyright held by the owner/author(s).
Other (outside lifecycle)
Outside the standard AI system lifecycle
Other (multiple actors)
Applies across multiple actor types
Govern
Policies, processes, and accountability structures for AI risk management
Primary
6.5 Governance failure