Practices for assessing AI systems, including testing, red teaming, risk assessment, auditing, and compliance verification.
The third theme concerns the internal practices that companies must adopt and further develop in order to audit their systems and to comply with the AI Act.
Reasoning
Mitigation name lacks concrete focal activity; placeholder text prevents mechanism identification.
Documentation Practices
Leveraging established documentation practices such as “model cards” and “datasheets” was a recurring topic throughout the interviews (Candidates 4, 5, 7). The establishment of these practices would “enable better oversight mechanisms on the market as well as create a kind of economic incentive for market actors to improve the quality of documentation of existing practices in order to comply with the AI Act” (Candidate 5).
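To make this concrete, the sketch below shows one way such documentation could be captured as machine-readable structured data. It is a minimal illustration only: the field names loosely follow the "model cards" proposal the interviewees reference, and every value is hypothetical rather than drawn from the interviews or mandated by the AI Act.

```python
# Illustrative sketch: a minimal "model card" as structured, machine-readable
# data. Field names loosely follow the model cards proposal; all values are
# hypothetical and serve only to show the shape of such documentation.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data: str = ""
    evaluation_metrics: dict[str, float] = field(default_factory=dict)
    ethical_considerations: str = ""

    def to_json(self) -> str:
        # A machine-readable export that oversight bodies or auditors
        # could ingest alongside the system itself.
        return json.dumps(asdict(self), indent=2)


card = ModelCard(
    model_name="credit-scoring-v2",  # hypothetical system
    version="2.1.0",
    intended_use="Pre-screening of consumer credit applications",
    out_of_scope_uses=["employment decisions", "insurance pricing"],
    training_data="Anonymised loan applications, 2015-2021 (hypothetical)",
    evaluation_metrics={"auc": 0.87, "demographic_parity_gap": 0.03},
    ethical_considerations="Known under-representation of young applicants.",
)
print(card.to_json())
```

Keeping the card as structured data rather than free text is what makes the "economic incentive" plausible: a machine-readable artefact can be validated, compared, and audited at scale.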
2.2.4 Assurance Documentation

Auditing
Auditing emerged in most interviews as a key established methodology that “will be at the core of compliance with the AI Act” (Candidate 5) and that can address the current difficulty of understanding the impact of systems throughout the development phase.
2.2.3 Auditing & Compliance

Assurance Practices
Another important topic, which emerged in two interviews, is the “necessity of establishing assurance practices” (Candidate 3): “methodologies to improve oversight mechanisms by verifying the validity of the claims made” (Candidates 3, 7). Assurance practices provide a framework for conducting “controls to ensure a certain technology does not produce wrong results” (Candidate 7).
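One way to make “verifying the validity of the claims made” operational is an automated control that re-measures a documented claim on held-out data and fails loudly when the claim does not hold. The sketch below is an assumption about how such a control could look; the metric, tolerance, and sample data are all illustrative.

```python
# Illustrative assurance control: re-verify a documented performance claim
# on held-out data. Names, threshold, and data are hypothetical.

def verify_claim(claimed_accuracy: float,
                 predictions: list[int],
                 labels: list[int],
                 tolerance: float = 0.02) -> None:
    """Fail if observed accuracy falls short of the documented claim."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    observed = correct / len(labels)
    if observed < claimed_accuracy - tolerance:
        raise AssertionError(
            f"Claimed accuracy {claimed_accuracy:.2%} not supported: "
            f"observed {observed:.2%} on {len(labels)} held-out samples."
        )

# Example: documentation claims 87% accuracy; the control re-checks it.
verify_claim(0.87,
             predictions=[1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
             labels=[1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
```

The point is not the specific metric but the pattern: a claim recorded in documentation becomes an executable check, so oversight no longer depends on taking the claim at face value.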
2.2.4 Assurance Documentation

Data Quality
Moreover, developing methodologies to assess data quality emerged in three interviews. One interviewee suggested that a “solution could be to apply to data the same regulations and requirements that are enacted in the supply chain to disincentivise the development of practices that explore how the data was sourced, similarly to what happens in the trade of raw materials” (Candidate 2). Data quality practices, for instance, should not merely cover the statistical properties of the training sets but should also “guarantee that no infringements to human rights were conducted in the sourcing phase” (Candidate 2).
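A minimal sketch of what such a data-quality gate could look like follows, covering both a statistical property of the records and the presence of sourcing metadata. The required provenance fields are an assumption inspired by the supply-chain analogy above, not an established standard.

```python
# Illustrative data-quality gate: checks a statistical property of a dataset
# AND the presence of sourcing/provenance metadata, echoing the supply-chain
# analogy. The required fields are assumptions, not an established standard.

REQUIRED_PROVENANCE = {"source", "collection_method", "consent_basis", "licence"}


def check_dataset(records: list[dict], provenance: dict) -> list[str]:
    issues = []
    # Statistical property: completeness of each record.
    incomplete = [i for i, r in enumerate(records) if None in r.values()]
    if incomplete:
        issues.append(f"{len(incomplete)} record(s) contain missing values")
    # Sourcing property: is the provenance of the data documented?
    missing = REQUIRED_PROVENANCE - provenance.keys()
    if missing:
        issues.append(f"undocumented provenance fields: {sorted(missing)}")
    return issues


data = [{"age": 34, "income": 52_000}, {"age": None, "income": 61_000}]
prov = {"source": "partner bank", "collection_method": "application forms"}
for issue in check_dataset(data, prov):
    print("DATA QUALITY:", issue)
```

Here the gate flags both a missing value and the absent consent and licence fields, illustrating how sourcing requirements can be checked mechanically rather than asserted.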
2.2.1 Risk Assessment

Cross-disciplinary training and collaboration
An important gap that the interviews identified is the lack of cooperation between the relevant stakeholders in AI systems; there is a lack of interdisciplinarity, both in research and in the legal domain (Candidates 1, 2, 4, 5, 6, 9, 10). The development and deployment of AI is “a multi-stakeholder subject, so one needs somebody who knows about these legal issues and how to translate them” into requirements for an AI product; “this person should also know how legal departments work” (Candidate 6). Another interview candidate explained the need to embed such an interface in every team: “I guess it would be good to reshape the organisation that every team has then one legal guy as an interface expert. In order to have the linkage in each team to all of these different regulations.” (Candidate 10)
2.1.2 Roles & Accountability

Tooling and systemic practices
Finally, one interview highlighted the “necessity of applying tools implemented at a system level” (Candidate 6). In general software systems, part of the quality assessment practice is implemented at the system level: documentation is mostly generated automatically from code, tests are executed by the continuous integration (CI) system, and versioning is an integral part of daily work, to give a few examples.
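To illustrate the system-level point, the sketch below shows the flavour of checks a CI pipeline could run automatically on every change: documentation extracted directly from the code, and a test executed without manual intervention. The scoring function and the policy it encodes are hypothetical.

```python
# Illustrative system-level checks of the kind a CI pipeline runs on every
# commit. The scoring function and the policy it encodes are hypothetical.
import inspect


def risk_score(income: float, debts: float) -> float:
    """Return a debt-to-income ratio in [0, 1]; higher means riskier."""
    if income <= 0:
        return 1.0
    return min(debts / income, 1.0)


def test_risk_score_is_bounded():
    # Executed automatically by the CI system (e.g. via pytest) so that
    # regressions are caught before deployment, not after.
    assert 0.0 <= risk_score(50_000, 10_000) <= 1.0
    assert risk_score(0, 10_000) == 1.0


def generate_docs() -> str:
    # Documentation generated from the code's own signature and docstring,
    # so the audit trail cannot drift from what is actually deployed.
    return f"risk_score{inspect.signature(risk_score)}: {inspect.getdoc(risk_score)}"


if __name__ == "__main__":
    test_risk_score_is_bounded()
    print(generate_docs())
```

Because these checks run at the system level rather than at any individual's discretion, they produce exactly the kind of continuous, reviewable evidence that an AI Act audit would rely on.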
2.4.3 Development Workflows

State of the art and best practices
The first theme that emerged through the analysis concerns the practices that the relevant stakeholders are already adopting or exploring. The experts described the methodologies that attempt to translate the ethical and legal requirements into actionable measures, and gave an overview of what is missing and could still be adopted.
99.9 Other

State of the art and best practices > Transparency and explainability
The findings show the experts’ efforts to understand how to “achieve transparency and explainability, whilst preserving intellectual property for AI, by following the procedures established by patent law” (Candidate 1).
2.4.2 Design Standards

State of the art and best practices > Corporate social responsibility
Another best practice is “corporate social responsibility”, together with the need to adopt better oversight mechanisms for corporate responsibility, as “it is problematic to translate them into a legal requirement” (Candidate 2).
2.1 Oversight & Accountability

State of the art and best practices > Testing
An important practice that emerged is the “leveraging of the testing methodologies, already established in the software engineering domain, to ensure that errors are identified before deployment” (Candidate 5). Among the testing methodologies that could be adapted to AI are “red team testing methodologies from the field of penetration testing” (Candidate 3).
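As an illustration of how established testing ideas transfer, the sketch below adapts a simple red-team-style check: systematically perturbing inputs and asserting that the system’s decision does not flip. The classifier and perturbation scheme are stand-ins, not a method proposed in the interviews.

```python
# Illustrative red-team-style robustness test: perturb inputs systematically
# and assert the decision does not flip. The classifier is a stand-in.
import random


def classify(features: list[float]) -> int:
    # Hypothetical model: approve (1) when the mean feature exceeds 0.5.
    return int(sum(features) / len(features) > 0.5)


def red_team_test(features: list[float],
                  noise: float = 0.01,
                  trials: int = 100) -> None:
    """Flag inputs whose decision flips under small random perturbations."""
    baseline = classify(features)
    rng = random.Random(0)  # seeded, so any failure is reproducible
    for _ in range(trials):
        perturbed = [x + rng.uniform(-noise, noise) for x in features]
        if classify(perturbed) != baseline:
            raise AssertionError(f"Decision flipped near input {features}")


red_team_test([0.7, 0.8, 0.9])  # input well inside the decision region: passes
```

A genuine red-team exercise would search for failures adversarially rather than randomly, but even this simple pattern turns “errors are identified before deployment” into a repeatable, automatable test.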
2.2.2 Testing & Evaluation

State of the art and best practices > Accountability
Moreover, accountability is a key practice “to define roles and obligations for all relevant stakeholders” (Candidate 5), and “further to comply with the upcoming regulation” (Candidate 6).
2.1.2 Roles & Accountability

Operational issues in the AI Act
Most interviewees agreed that one of the biggest current challenges in regulating AI is operationalising the legal requirements of the AI Act. It must be noted that the interviews were conducted before December 6th, 2022, when the Council of the European Union published an updated, compromise version of the AI Act. That reformulation should deliver sufficient information to distinguish software systems from AI systems.
3.1.1 Legislation & Policy

AI Regulation Is (not) All You Need
Lucaj, Laura; van der Smagt, Patrick; Benbouzid, Djalel (2023)
The development of processes and tools for ethical, trustworthy, and legal AI is only beginning. At the same time, legal requirements are emerging in various jurisdictions, following a deluge of ethical guidelines. It is therefore key to explore the necessary practices that must be adopted to ensure the quality of AI systems, mitigate their potential risks and enable legal compliance. Ensuring that the potential negative impacts of AI on individuals, society, and the environment are mitigated will depend on many factors, including the capacity to properly regulate its deployment and to mandate necessary internal best practices along lifecycles. Regulatory frameworks must evolve from abstract requirements to providing concrete operational mandates that enable better oversight mechanisms in the way AI systems operate, how they are developed, and how they are deployed. In view of the above, this paper explores the necessary practices that can be adopted throughout a comprehensive lifecycle audit as a key practice to ensure the quality of AI systems and enable the development of compliance mechanisms. It also discusses novel governance tools that enable bridging the current operational gaps. Such gaps were identified by interviewing experts, analysing adaptable tools and methodologies from the software engineering domain, and by exploring the state of the art of auditing. The results present recommendations for novel tools and oversight mechanisms for governing AI systems. © 2023 ACM.
Other (multiple stages)
Applies across multiple lifecycle stages
Other (multiple actors)
Applies across multiple actor types
Govern
Policies, processes, and accountability structures for AI risk management