Practices governing how AI systems are designed and built, including research norms, development workflows, review processes, and engineering standards.
Requirements engineering formalizes safety and performance specifications during the system design phase.
AI suitability assessment
Before starting to build a software system with AI, the development team first needs to identify the right problem to solve and the corresponding user needs. Once the problem is identified and the environment in which the system will be situated is fully explored, the development team needs to analyze whether the system and its users benefit from AI or are potentially degraded by it [89]. It is essential to make sure that AI adds value to the design.
2.2.1 Risk Assessment
Verifiable ethical requirement
The development of AI systems needs to adhere to AI ethics principles which are generally abstract and domain agnostic. Ethical requirements need to be derived from the AI ethics principles to fit into a specific domain and system context [16, 49, 93, 118, 128]. Every ethical requirement specified in a requirements specification document should be put into a verifiable form (i.e., with acceptance criteria). This means that a person or machine can later check that the AI system meets the ethical requirements that are derived from AI ethics principles and grounded in users’ needs. Vague or unverifiable statements should be avoided [110]. If there is no way to determine whether the AI system meets a particular ethical requirement, then this ethical requirement should be revised or removed.
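As a sketch of what "verifiable form" can mean in practice, the hypothetical structure below pairs an ethical requirement with a measurable acceptance criterion and a machine check; all names, metrics, and thresholds are illustrative assumptions, not part of the catalogue:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EthicalRequirement:
    """A verifiable ethical requirement: the abstract principle it is
    derived from, a system-specific statement grounded in user needs,
    and a machine-checkable acceptance criterion."""
    principle: str                 # abstract AI ethics principle
    statement: str                 # domain- and system-specific requirement
    metric: str                    # what is measured to verify it
    threshold: float               # acceptance criterion
    measure: Callable[[], float]   # evaluation hook (e.g., a test run)

    def is_met(self) -> bool:
        # Verifiable: a machine can check the criterion later.
        return self.measure() <= self.threshold

# Hypothetical example: "fairness" operationalised as a demographic
# parity gap that must stay below 5 percentage points.
req = EthicalRequirement(
    principle="Fairness",
    statement="Loan approval rates must not differ materially "
              "across protected groups.",
    metric="demographic parity gap",
    threshold=0.05,
    measure=lambda: 0.03,  # stub; in practice computed from evaluation data
)
print(req.is_met())
```

A vague statement such as "the system must be fair" would have no `measure` and no `threshold`, and by the rule above would need to be revised or removed.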
2.4.2 Design Standards
Data Requirements throughout the Entire Lifecycle
Data requirements need to be listed explicitly and specified throughout the data lifecycle (i.e., collection, cleaning, preparation, validation, analysis, and termination), taking into account ethical principles and the stakeholders involved (i.e., data providers, data engineers, data scientists, data consumers, data auditors). Data requirements can be managed through a data requirements specification. The specification could include detailed requirements for each phase of the data lifecycle, such as data collection requirements covering data sources and collection methods. Google has created a template for dataset requirements specifications.
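The phase-by-phase idea can be sketched in code. In the hypothetical snippet below, the phase keys mirror the lifecycle named above, while the field contents and the completeness check are illustrative assumptions, not Google's actual template:

```python
# Hypothetical data requirements specification, keyed by the data
# lifecycle phases; the example requirements are illustrative only.
LIFECYCLE_PHASES = ["collection", "cleaning", "preparation",
                    "validation", "analysis", "termination"]

spec = {
    "collection":  {"requirements": ["document data sources", "record consent"]},
    "cleaning":    {"requirements": ["log removed records"]},
    "preparation": {"requirements": ["version transformed datasets"]},
    "validation":  {"requirements": ["check label balance across groups"]},
    "analysis":    {"requirements": ["restrict access to approved roles"]},
    "termination": {"requirements": ["define retention and deletion policy"]},
}

# A simple audit: every lifecycle phase must carry at least one
# explicit requirement, so gaps in coverage become visible.
missing = [p for p in LIFECYCLE_PHASES
           if not spec.get(p, {}).get("requirements")]
print(missing)  # an empty list means the spec covers the whole lifecycle
```

Keeping the specification machine-readable like this lets coverage be checked automatically as phases or requirements are added.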
2.4.2 Design Standards
Ethical user story
In agile processes, ethical user stories [43, 93] can help the development team elicit ethical requirements for AI systems and implement AI ethics principles from the early stages of development. Ethical user stories are created to serve as items of the product backlog that is to be worked on by the development team in iterations (i.e., sprints). Card-based toolkits can be used to list questions related to AI ethics principles. The answers to those questions are integrated into ethical user stories to be included in sprint backlogs. The development team or users can write ethical user stories on cards or notes using a pre-defined template and assign them to different sprints based on priority.
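A minimal sketch of such a pre-defined template, with one illustrative story; the field layout and example content are assumptions, not a standard card format:

```python
# Hypothetical ethical user story template: the familiar agile
# "As a / I want / so that" form, extended with the ethics principle
# it operationalises and a verifiable acceptance criterion.
def ethical_user_story(role, want, so_that, principle, acceptance):
    return (f"As a {role}, I want {want}, so that {so_that}.\n"
            f"Ethics principle: {principle}\n"
            f"Acceptance criteria: {acceptance}")

# Illustrative backlog item for a sprint.
story = ethical_user_story(
    role="loan applicant",
    want="an explanation of why my application was rejected",
    so_that="I can understand and contest the decision",
    principle="Transparency",
    acceptance="Every rejection shows the top three contributing factors",
)
print(story)
```

Because each story names its principle and acceptance criteria, it can be prioritised and verified in a sprint like any other backlog item.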
2.4.3 Development Workflows
Governance Patterns
The governance of RAI systems can be defined as the structures and processes employed to ensure that the development and use of AI systems meet AI ethics principles. Following Shneiderman's structure [104], governance can be built at three levels: the industry level, the organization level, and the team level.
2.1 Oversight & Accountability
Governance Patterns > Industry-level governance patterns
3.1 Legal & Regulatory
Governance Patterns > Organization-level governance patterns
2.1 Oversight & Accountability
Governance Patterns > Team-level governance patterns
2.1.2 Roles & Accountability
Process Patterns
Process patterns are reusable methods and best practices that the development team can apply during the development process.
2.4.2 Design Standards
Process Patterns > Design
2.4 Engineering & Development
Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering
Lu, Qinghua; Zhu, Liming; Xu, Xiwei; Whittle, Jon; Zowghi, Didar; Jacquet, Aurelie (2024)
Responsible Artificial Intelligence (RAI) is widely considered as one of the greatest scientific challenges of our time and is key to increase the adoption of Artificial Intelligence (AI). Recently, a number of AI ethics principles frameworks have been published. However, without further guidance on best practices, practitioners are left with nothing much beyond truisms. In addition, significant efforts have been placed at algorithm level rather than system level, mainly focusing on a subset of mathematics-amenable ethical principles, such as fairness. Nevertheless, ethical issues can arise at any step of the development lifecycle, cutting across many AI and non-AI components of systems beyond AI algorithms and models. To operationalize RAI from a system perspective, in this article, we present an RAI Pattern Catalogue based on the results of a multivocal literature review. Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle. The RAI Pattern Catalogue classifies the patterns into three groups: multi-level governance patterns, trustworthy process patterns, and RAI-by-design product patterns. These patterns provide systematic and actionable guidance for stakeholders to implement RAI. © 2024 Copyright held by the owner/author(s).
Plan and Design
Designing the AI system, defining requirements, and planning development
Developer
Entity that creates, trains, or modifies the AI system
Map
Identifying and documenting AI risks, contexts, and impacts