This is the primary procedure in ethical regulation, where AI systems serve as inputs to the process (indicated by ‘‘AI in’’ and Path 1 in Fig. 2). Because AI systems are rapidly evolving [24], they require real-time monitoring and feedback throughout their entire lifecycle to facilitate updates and iterations (represented by ‘‘AI 1’’, ‘‘AI 2’’, ..., ‘‘AI n’’ in Fig. 2). This process ensures that the final application state of the AI system adheres to ethical requirements in real-world settings (as shown by ‘‘AI out’’ and Path 2 in Fig. 2).
In this process, users assume typical sensor-like roles, identifying issues (Path 3) that they can report to developers (Path 8) or to reviewers and other relevant entities (Path 6), such as ethics review committees, to prompt intensified reviews (Path 4 downward) and corrective actions by developers (Path 5 downward). The typical reviewer is the ethics review committee charged with the review task (Path 4 downward). Upon identifying ethical issues during the review (Path 4 upward) or from user feedback (Path 6), the committee should make timely decisions and require developers to implement improvements or terminate their work (Paths 7 and 5 downward), thus assuming both sensor-like and controller-like roles. The developer assumes a pivotal actuator-like role but is also responsible for self-monitoring (Path 5 upward) and actively making improvements (Path 5 downward) to meet regulatory requirements, thereby assuming sensor-like, controller-like, and actuator-like roles.

In the event of an unexpected ethical incident that has caused or may cause harm, an emergency response plan should be promptly activated. This represents a special case within the ethical review process, in which the emergency team assumes a controller-like role, coordinating multiple stakeholders to respond swiftly and mitigate the impact. After designing the overall process, practical implementation can proceed through sub-processes or steps, such as deploying monitoring tools, establishing monitoring baselines, setting up alerts and response actions, and implementing data storage and auditing protocols.
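The sensor/controller/actuator role assignments above can be sketched as a single feedback-loop iteration. This is a minimal illustrative model, not an implementation from the paper; all class and method names (`User`, `EthicsReviewCommittee`, `Developer`, `decide`, `act`) and the severity thresholds are assumptions introduced here for clarity.

```python
from dataclasses import dataclass


@dataclass
class Issue:
    description: str
    severity: str  # assumed levels: "minor", "major", "critical"


class User:
    """Sensor-like role: detects issues and reports them (Paths 3, 6, 8)."""

    def report(self, issue: Issue) -> Issue:
        return issue


class EthicsReviewCommittee:
    """Sensor- and controller-like role: reviews reported issues and decides
    whether developers must improve or terminate work (Paths 4, 6, 7)."""

    def decide(self, issue: Issue) -> str:
        # Assumed decision rule: only critical issues trigger termination.
        return "terminate" if issue.severity == "critical" else "improve"


class Developer:
    """Sensor-, controller-, and actuator-like role: self-monitors and
    applies corrective actions (Path 5)."""

    def __init__(self) -> None:
        self.actions: list[str] = []

    def act(self, decision: str, issue: Issue) -> None:
        self.actions.append(f"{decision}: {issue.description}")


# One iteration of the loop: an issue is sensed, reviewed, and acted upon.
user, committee, developer = User(), EthicsReviewCommittee(), Developer()
issue = user.report(Issue("biased output in screening model", "major"))
developer.act(committee.decide(issue), issue)
print(developer.actions)  # -> ['improve: biased output in screening model']
```

The point of the sketch is the separation of concerns: sensing, deciding, and acting are distinct responsibilities, and a single stakeholder (the developer) can hold more than one of them, exactly as the role descriptions above indicate.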
Incentive and penalty process
This process draws on the reinforcement concept from behavioral psychology, using rewards and penalties as reinforcement methods to ensure that the regulatory mechanism translates effectively into practice.
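One way to read the reinforcement idea is as a running compliance score that rewards passed reviews and penalizes violations. The sketch below is an assumed illustration, not a mechanism defined by the paper; the constants and function names are hypothetical.

```python
# Assumed reinforcement weights: the penalty is stronger than the reward,
# reflecting that violations should outweigh routine compliance.
REWARD = 1.0
PENALTY = -2.0


def update_score(score: float, passed_review: bool) -> float:
    """Apply one round of reinforcement to a stakeholder's compliance score."""
    return score + (REWARD if passed_review else PENALTY)


score = 0.0
for outcome in [True, True, False, True]:  # three passed reviews, one violation
    score = update_score(score, outcome)
print(score)  # -> 1.0
```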
Mechanism improvement process
This process is essential for optimizing the regulatory mechanism, emphasizing the iterative evaluation and dynamic adjustment of ethical regulatory processes. This approach ensures that review procedures are continually refined to address emerging ethical issues and operational challenges.
Developing an Ethical Regulatory Framework for Artificial Intelligence: Integrating Systematic Review, Thematic Analysis, and Multidisciplinary Theories
Wang, Jian; Huo, Yujia; Mahe, Jinli; Ge, Zongyuan; Liu, Zhangdaihong; Wang, Wenxin; Zhang, Lin (2024)
Artificial intelligence (AI) ethics has emerged as a global discourse within both academic and policy spheres. However, translating these principles into concrete, real-world applications for AI development remains a pressing need and a significant challenge. This study aims to bridge the gap between principles and practice from a regulatory government perspective and promote best practices in AI governance. To this end, we developed the Ethical Regulatory Framework for AI (ERF-AI) to guide regulatory bodies in constructing mechanisms, including role setups, procedural configurations, and strategy design. The framework was developed through a systematic review, thematic analysis, and the integration of interdisciplinary concepts. A comprehensive search was conducted across four electronic databases (PubMed, IEEE Xplore, Web of Science, and Scopus) and four additional sources containing AI standards and guidelines from various countries and international organizations, focusing on studies published from 2014 to 2024. Thematic analysis identified and refined key themes from the included literature and integrated concepts from process control theory, computer science, organizational management, information technology, and behavioral psychology. This study adhered to the PRISMA guidelines and employed NVivo for thematic analysis. The resulting framework encompasses 23 themes, particularly emphasizing three feedback-loop processes: the ethical review process, the incentive and penalty process, and the mechanism improvement process, offering theoretical guidance for the construction of ethical regulatory mechanisms. Based on this framework, a seven-step process and case examples for mechanism design are presented, enhancing the practicality of ERF-AI in developing ethical regulatory mechanisms. 
Future research is expected to explore customization of the framework to remain responsive to emerging AI trends and challenges, supported by empirical studies and rigorous testing for further refinement and expansion. © 2024 IEEE.
Operate and Monitor: Running, maintaining, and monitoring the AI system post-deployment
Deployer: Entity that integrates and deploys the AI system for end users
Manage: Prioritising, responding to, and mitigating AI risks
6.5 Governance failure