Practices governing how AI systems are designed and built, including research norms, development workflows, review processes, and engineering standards.
Reasoning
Insufficient detail to identify the focal activity or determine the focal location. "Design" requires specificity to distinguish between technical, organizational, or ecosystem implementations.
Multi-level co-architecting
Multi-level co-architecting is required to ensure the seamless integration of different components: co-architecting AI and non-AI components, and co-architecting the components of the AI model pipeline. Multi-level co-architecting allows both system- and model-level requirements to be considered in design decision making.
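To make the idea concrete, here is a minimal sketch of how a design decision could be checked against requirements captured at both levels. All class names, requirement texts, and the loan-approval example are illustrative assumptions, not part of the catalogue.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """A requirement captured at a specific architectural level."""
    level: str        # "system" or "model"
    description: str

@dataclass
class DesignDecision:
    """A design decision evaluated against requirements from both levels."""
    name: str
    satisfies: list = field(default_factory=list)

    def covers_both_levels(self) -> bool:
        # Co-architecting: the decision must address system- AND model-level needs.
        levels = {r.level for r in self.satisfies}
        return {"system", "model"} <= levels

# Hypothetical requirements for a loan-approval AI system.
latency = Requirement("system", "End-to-end decision latency under 500 ms")
fairness = Requirement("model", "Demographic parity gap below 0.05")

decision = DesignDecision("Use a distilled model behind a caching gateway",
                          satisfies=[latency, fairness])
print(decision.covers_both_levels())  # True: both levels considered together
```

A decision that satisfied only one level would fail the check, flagging that the other level was ignored during design.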
2.4.2 Design Standards: Envisioning card
Envisioning cards [17, 115] are designed to help the development team operationalize human values during the design of AI systems. The design of envisioning cards is based on four envisioning criteria: stakeholder, time, value, and pervasiveness. The stakeholder criterion helps the development team take into account the effects of an AI system on both direct and indirect stakeholders. The time criterion emphasizes the long-term implications of AI systems for humans, society, and the environment. The value criterion guides the development team to consider the impact of AI systems on human values. The pervasiveness criterion addresses the challenges encountered if an AI system is widely adopted across geographies, cultures, demographics, and so on.
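A simple way to picture the deck is one card per criterion, each carrying a prompt the team discusses during design. The criteria below are the four named above; the prompt wordings are illustrative assumptions, not quotes from the cited cards.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvisioningCard:
    """One envisioning card: a criterion plus a discussion prompt."""
    criterion: str   # "stakeholder" | "time" | "value" | "pervasiveness"
    prompt: str

# Illustrative deck covering the four envisioning criteria.
DECK = [
    EnvisioningCard("stakeholder",
                    "Who is indirectly affected by the system's outputs?"),
    EnvisioningCard("time",
                    "What are the effects on society after years of use?"),
    EnvisioningCard("value",
                    "Which human values could this feature support or erode?"),
    EnvisioningCard("pervasiveness",
                    "What changes if this is adopted across cultures and regions?"),
]

for card in DECK:
    print(f"[{card.criterion}] {card.prompt}")
```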
2.4.2 Design Standards: Design modeling for ethics
Design modeling methods can be extended to support the modeling of AI components and their ethical aspects, including using UML to describe the architecture of AI systems and represent their ethical aspects [114], designing formal models that take human values into account [36], using ontologies to model AI system artifacts for accountability [10, 85], establishing RAI knowledge bases for making design decisions that consider ethical concerns [101], and using logic programming to implement ethical principles.
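As one small illustration of the knowledge-base idea, ethical concerns can be mapped to candidate design tactics and queried during design decision making. The concerns, tactics, and function names below are illustrative assumptions in the spirit of [101], not content from that work.

```python
# Illustrative RAI knowledge base: ethical concern -> candidate design tactics.
RAI_KB = {
    "fairness": ["bias audit before release",
                 "balanced training data review"],
    "accountability": ["provenance logging of model artifacts",
                       "sign-off workflow for releases"],
    "privacy": ["data minimisation",
                "differential privacy for analytics"],
}

def tactics_for(concerns):
    """Look up design tactics addressing the given ethical concerns."""
    return sorted({t for c in concerns for t in RAI_KB.get(c, [])})

print(tactics_for(["fairness", "privacy"]))
```

A real knowledge base would also record rationale and trade-offs for each tactic, but even this lookup shape makes ethical concerns first-class inputs to design decisions.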
2.4.2 Design Standards: System-Level Ethical Simulation
System-level simulation (e.g., [29, 30, 96, 106]) is a cost-effective way to imitate real-world situations and assess the behaviors of AI systems before deployment. A simulation model is built to mimic the possible behaviors and decisions of the AI system and to assess their ethical impacts. The assessment results can be shared with the development team or potential users, allowing potential ethical risks to be predicted and serious ethical harms to be avoided before the AI system is deployed in the real world.
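The core loop of such a simulation can be sketched as: run the system's decision policy against many simulated scenarios, count ethically harmful outcomes, and gate deployment on the estimated harm rate. The shuttle domain, function names, and threshold below are all illustrative assumptions.

```python
import random

def simulate_system(policy, scenarios, harm_threshold=0.1):
    """Run the decision policy over simulated scenarios and estimate the
    rate of ethically harmful outcomes before real-world deployment."""
    harmful = sum(1 for s in scenarios if is_harmful(s, policy(s)))
    harm_rate = harmful / len(scenarios)
    return {"harm_rate": harm_rate, "deploy_ok": harm_rate <= harm_threshold}

# Hypothetical toy domain: an autonomous shuttle choosing speed in crowded zones.
def is_harmful(scenario, decision):
    # Driving fast through a dense pedestrian zone counts as a harmful outcome.
    return scenario["pedestrian_density"] > 0.7 and decision == "fast"

def cautious_policy(scenario):
    return "slow" if scenario["pedestrian_density"] > 0.5 else "fast"

random.seed(0)  # deterministic scenario generation for a repeatable assessment
scenarios = [{"pedestrian_density": random.random()} for _ in range(1000)]
report = simulate_system(cautious_policy, scenarios)
print(report)  # cautious policy never speeds in dense zones, so harm_rate is 0.0
```

The same harness can compare candidate policies or stress rare scenarios that would be too risky to probe after deployment.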
2.2.2 Testing & Evaluation: Human-Centered Interface Design for Explainable Artificial Intelligence
Explainable Artificial Intelligence (XAI) can be viewed as a human-AI interaction problem and achieved through effective human-centered interface design. Checklists (e.g., a question bank) are often used to help design explainable user interfaces [62, 66, 67] and to understand user needs, choices of XAI techniques, and XAI design factors [67]. For example, checklist questions could consider the following aspects [66] for different stakeholders: input, output, how, performance (which can be extended to ethical performance), why and why not, what if, and so on.
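A question bank of this kind can be represented as a mapping from aspect to question, from which each stakeholder's interface selects the subset it must answer. The aspect keys follow the list above; the question wordings are illustrative assumptions, not quotes from the cited checklists.

```python
# Illustrative question bank keyed by the checklist aspects listed above.
QUESTION_BANK = {
    "input": "What data does the system use to reach this result?",
    "output": "What does the system produce, and in what form?",
    "how": "What is the system's overall logic for producing outputs?",
    "performance": "How reliable are the outputs, including ethical performance?",
    "why": "Why did the system produce this particular result?",
    "why_not": "Why did the system not produce an alternative result?",
    "what_if": "What would the system produce if the input changed?",
}

def interface_questions(aspects):
    """Select the questions a given stakeholder's interface should answer."""
    return [QUESTION_BANK[a] for a in aspects if a in QUESTION_BANK]

# An end user's explanation panel might cover only a subset of aspects:
for q in interface_questions(["why", "what_if", "performance"]):
    print("-", q)
```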
2.4.2 Design Standards: Governance Patterns
Governance for RAI systems can be defined as the structures and processes employed to ensure that the development and use of AI systems meet AI ethics principles. Following the structure of Shneiderman [104], governance can be built at three levels: the industry level, the organization level, and the team level.
2.1 Oversight & Accountability: Governance Patterns > Industry-level governance patterns
3.1 Legal & Regulatory: Governance Patterns > Organization-level governance patterns
2.1 Oversight & Accountability: Governance Patterns > Team-level governance patterns
2.1.2 Roles & Accountability: Process Patterns
Process patterns are reusable methods and best practices that the development team can apply throughout the development process.
2.4.2 Design Standards: Process Patterns > Requirement Engineering
2.4 Engineering & Development: Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering
Lu, Qinghua; Zhu, Liming; Xu, Xiwei; Whittle, Jon; Zowghi, Didar; Jacquet, Aurelie (2024)
Responsible Artificial Intelligence (RAI) is widely considered one of the greatest scientific challenges of our time and is key to increasing the adoption of Artificial Intelligence (AI). Recently, a number of AI ethics principles frameworks have been published. However, without further guidance on best practices, practitioners are left with little beyond truisms. In addition, significant effort has been directed at the algorithm level rather than the system level, mainly focusing on a subset of mathematics-amenable ethical principles, such as fairness. Nevertheless, ethical issues can arise at any step of the development lifecycle, cutting across many AI and non-AI components of systems beyond AI algorithms and models. To operationalize RAI from a system perspective, in this article, we present an RAI Pattern Catalogue based on the results of a multivocal literature review. Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle. The RAI Pattern Catalogue classifies the patterns into three groups: multi-level governance patterns, trustworthy process patterns, and RAI-by-design product patterns. These patterns provide systematic and actionable guidance for stakeholders to implement RAI. © 2024 Copyright held by the owner/author(s).
Plan and Design
Designing the AI system, defining requirements, and planning development
Developer
Entity that creates, trains, or modifies the AI system
Unable to classify
Could not be classified to a specific AIRM function