Practices for assessing AI systems, including testing, red teaming, risk assessment, auditing, and compliance verification.
Reasoning
The mitigation name "Testing" lacks a description and supporting evidence, so it cannot be determined whether it refers to organizational assessment, technical monitoring, or ecosystem-level frameworks.
Ethical Acceptance Testing
Ethical acceptance testing (e.g., bias testing) is designed to detect ethics-related design flaws and verify ethical requirements (e.g., whether the data pipeline has appropriate privacy controls, or fairness testing of training/validation data) [3, 20, 123]. In an agile process, ethical requirements can be framed as ethical user stories and associated with ethical acceptance tests. These acceptance tests act as a contract between the customer and the development team. The behavior of the AI system should be quantified by the acceptance tests, and the acceptance criteria for each ethical principle should be defined in a testable way. The history of ethical acceptance testing should be recorded and tracked, including how and by whom ethical issues were fixed. A testing leader may be appointed to lead ethical acceptance testing for each ethics principle; for example, when bias is detected at runtime, the monitoring reports are returned to the bias testing leader [28, 104]. Ethical acceptance tests capture ethical requirements and measure how well the AI system meets them, but they may need to be amended frequently as ethical requirements change.
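The idea of defining acceptance criteria "in a testable way" can be sketched as a pass/fail check. The following is a minimal illustration, not from the catalogue: the fairness metric (maximum gap in positive-prediction rates across groups), the sample data, and the 0.1 threshold are all hypothetical assumptions standing in for a criterion the customer and development team would agree on.

```python
# Sketch of an ethical acceptance test: a fairness requirement
# ("selection rates across groups differ by at most 0.1") expressed
# as an executable pass/fail check. Metric, data, and threshold are
# illustrative assumptions, not prescribed by the catalogue.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = {}
    for pred, group in zip(predictions, groups):
        hits, total = rates.get(group, (0, 0))
        rates[group] = (hits + pred, total + 1)
    positive_rates = [hits / total for hits, total in rates.values()]
    return max(positive_rates) - min(positive_rates)

def test_selection_rate_parity():
    # Hypothetical model decisions (1 = positive outcome) and group labels.
    predictions = [1, 0, 1, 0, 0, 1, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap = demographic_parity_gap(predictions, groups)
    # Acceptance criterion framed from an ethical user story:
    # the parity gap must not exceed the agreed threshold of 0.1.
    assert gap <= 0.1, f"fairness acceptance test failed: gap={gap:.2f}"
```

Run under any test runner, a failure here signals an ethics-related design flaw in the same way a failing functional acceptance test signals a functional defect, which is what makes the ethical requirement contractually checkable.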
Ethical Assessment for Test Cases
A collection of test cases with expected results should be generated [83] and maintained to detect possible ethical failures in a variety of extreme situations [42]. However, there might be ethical issues within the test cases themselves; for example, the test data may introduce fairness or privacy issues [77]. Preparing quality test cases is an integral part of ethical acceptance testing. A test case is usually composed of an ID, description, preconditions, test steps, test data, expected results, actual results, status, creator name, creation date, executor name, and execution date. All test cases for verification and validation should pass an ethics assessment, including an ethical risk assessment of the test steps and test data. The creation and execution information is essential for tracking accountability for ethical issues with test cases. Ethical assessment of test cases improves the ethical quality of the AI system development process, but new test cases need to be continually added and assessed when a new ethical requirement is added or the operating context changes.
Governance Patterns
Governance for RAI systems can be defined as the structures and processes employed to ensure that the development and use of AI systems meet AI ethics principles. Following the structure proposed by Shneiderman [104], governance can be built at three levels: the industry level, the organization level, and the team level.
Governance Patterns > Industry-level governance patterns
Governance Patterns > Organization-level governance patterns
Governance Patterns > Team-level governance patterns
Process Patterns
Process patterns are reusable methods and best practices that the development team can apply during the development process.
Process Patterns > Requirement Engineering
Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering
Lu, Qinghua; Zhu, Liming; Xu, Xiwei; Whittle, Jon; Zowghi, Didar; Jacquet, Aurelie (2024)
Responsible Artificial Intelligence (RAI) is widely considered one of the greatest scientific challenges of our time and is key to increasing the adoption of Artificial Intelligence (AI). Recently, a number of AI ethics principles frameworks have been published. However, without further guidance on best practices, practitioners are left with little beyond truisms. In addition, significant effort has been placed at the algorithm level rather than the system level, mainly focusing on a subset of mathematics-amenable ethical principles, such as fairness. Nevertheless, ethical issues can arise at any step of the development lifecycle, cutting across the many AI and non-AI components of systems beyond AI algorithms and models. To operationalize RAI from a system perspective, in this article, we present an RAI Pattern Catalogue based on the results of a multivocal literature review. Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle. The RAI Pattern Catalogue classifies the patterns into three groups: multi-level governance patterns, trustworthy process patterns, and RAI-by-design product patterns. These patterns provide systematic and actionable guidance for stakeholders to implement RAI. © 2024 Copyright held by the owner/author(s).
Deploy: Releasing the AI system into a production environment.
Governance Actor: A regulator, standards body, or oversight entity shaping AI policy.
Measure: Quantifying, testing, and monitoring identified AI risks.
Other