This page is still being polished. If you have thoughts, please share them via the feedback form.
Data on this page is preliminary and may change. Please do not share or cite these figures publicly.
Internal policies, content safety guidelines, and ethical design principles governing system creation.
Also in Engineering & Development
The findings show the experts’ efforts into understanding how to “achieve transparency and explainability, whilst preserving intellectual property for AI, by following the procedures established by patent law” (Candidate 1).
Reasoning
Output filtering mechanism blocks harmful content before user delivery.
State of the art and best practices
The first theme that emerged through the analysis concerns the practices that the relevant stakeholders are already adopting or exploring. The experts revealed the best methodologies that attempt at translating the ethical and legal requirements into actionable measures; and gave an overview of what is missing that could be adopted.
99.9 OtherState of the art and best practices > Corporate social responsibility
Another best practice is “corporate social responsibility” and the necessity to adopt better oversight mechanisms for corporate responsibility as “it is problematic to translate them into a legal requirement” (Candidate 2).
2.1 Oversight & AccountabilityState of the art and best practices > Testing
An important practice that emerged is the “leveraging of the testing methodologies, already established in the software engineering domain, to ensure that errors are identified before deployment” (Candidate 5). Among the testing methodologies that could be adapted to AI are “red team testing methodologies from the field of penetration testing” (Candidate 3)
2.2.2 Testing & EvaluationState of the art and best practices > Accountability
Moreover, accountability is a key practice “to define roles and obligations for all relevant stakeholders” (Candidate 5), and “further to comply with the upcoming regulation” (Candidate 6).
2.1.2 Roles & AccountabilityOperational issues in the AI act
Most interviewees agreed that one of the current biggest challenges around regulating AI is to operationalise the legal requirements of the AI Act. It must be noted that the interviews were conducted before December 6th, 2022, when the Council of Europe published an updated, compromised version of the AI Act. Such reformulation should deliver sufficient information to distinguish software systems from AI systems2
3.1.1 Legislation & PolicyOperational issues in the AI act > Risk Classification
3.1.1 Legislation & PolicyAI Regulation Is (not) All You Need
Lucaj, Laura; van der Smagt, Patrick; Benbouzid, Djalel (2023)
The development of processes and tools for ethical, trustworthy, and legal AI is only beginning. At the same time, legal requirements are emerging in various jurisdictions, following a deluge of ethical guidelines. It is therefore key to explore the necessary practices that must be adopted to ensure the quality of AI systems, mitigate their potential risks and enable legal compliance. Ensuring that the potential negative impacts of AI on individuals, society, and the environment are mitigated will depend on many factors, including the capacity to properly regulate its deployment and to mandate necessary internal best practices along lifecycles. Regulatory frameworks must evolve from abstract requirements to providing concrete operational mandates that enable better oversight mechanisms in the way AI systems operate, how they are developed, and how they are deployed. In view of the above, this paper explores the necessary practices that can be adopted throughout a comprehensive lifecycle audit as a key practice to ensure the quality of AI systems and enable the development of compliance mechanisms. It also discusses novel governance tools that enable bridging the current operational gaps. Such gaps were identified by interviewing experts, analysing adaptable tools and methodologies from the software engineering domain, and by exploring the state of the art of auditing. The results present recommendations for novel tools and oversight mechanisms for governing AI systems. © 2023 ACM.
Other (multiple stages)
Applies across multiple lifecycle stages
Governance Actor
Regulator, standards body, or oversight entity shaping AI policy
Govern
Policies, processes, and accountability structures for AI risk management