Laws, legal frameworks, and binding policy instruments governing AI development and use.
Reasoning
The mitigation name and evidence fields contain placeholders, which prevents identifying the focal activity or implementation location.
Risk-Based Regulatory Approach
A prominent direction for future AI regulation is the adoption of a risk-based regulatory framework. Under this approach, AI applications are classified according to their potential risks to individuals, society, and national security. High-risk applications, such as those used in critical sectors like healthcare, autonomous vehicles, or criminal justice, would be subject to stringent regulatory oversight, including mandatory transparency, regular audits, and accountability mechanisms. Lower-risk applications, like AI-powered customer service tools, might face more lenient requirements, promoting innovation without unnecessary regulatory burdens. A risk-based approach keeps regulation proportional to the level of threat posed by an AI system, allowing for both innovation and protection. Policymakers must also consider sector-specific standards and harmonize regulations across international borders to avoid fragmented approaches that could stifle global cooperation.
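The tiering logic described above can be sketched as a simple lookup from application sector to risk tier to obligations. This is a minimal illustration only: the sector names, tier labels, and obligation lists below are assumptions chosen for the example (loosely inspired by tiered frameworks such as the EU AI Act), not categories defined by any actual regulation.

```python
# Illustrative sketch of a risk-based tiering scheme.
# All sectors, tiers, and obligations here are hypothetical examples.

# Map application sectors to an assumed risk tier.
RISK_TIERS = {
    "healthcare": "high",
    "autonomous_vehicles": "high",
    "criminal_justice": "high",
    "customer_service": "limited",
}

# Map each tier to the oversight obligations it triggers.
OBLIGATIONS = {
    "high": ["mandatory transparency", "regular audits", "accountability mechanisms"],
    "limited": ["basic transparency notice"],
    "minimal": [],
}

def obligations_for(sector: str) -> list:
    """Return the illustrative obligations for a sector's risk tier.

    Sectors not listed default to the 'minimal' tier, reflecting the
    principle that regulation scales with the level of risk.
    """
    tier = RISK_TIERS.get(sector, "minimal")
    return OBLIGATIONS[tier]

print(obligations_for("healthcare"))
# ['mandatory transparency', 'regular audits', 'accountability mechanisms']
print(obligations_for("customer_service"))
# ['basic transparency notice']
```

The design point is proportionality: the same classification function serves every sector, and only the tier mapping changes, so obligations scale with assessed risk rather than being uniform across all AI systems.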
3.1.1 Legislation & Policy

Promoting Ethical AI Development
Ethical AI development is another crucial pillar of future AI policy. Policymakers need to prioritize the establishment of ethical guidelines that ensure AI systems are designed and deployed in ways that respect human rights, fairness, and non-discrimination. This includes implementing mechanisms to eliminate bias in AI algorithms, enhancing transparency in decision-making processes, and ensuring that AI systems are accountable for their outcomes. Moreover, public participation and multistakeholder collaboration, including input from ethicists, civil society, and industry experts, should be a core component of regulatory development. This inclusive approach will allow for the consideration of diverse perspectives, ensuring that AI regulation reflects societal values and priorities.
3.2.2 Technical Standards

International Cooperation and Standards Harmonization
As AI is a global technology, international cooperation is essential to creating effective and consistent regulatory frameworks. Countries should work together to develop shared standards and principles that can guide the responsible development and deployment of AI. Establishing global norms will help prevent regulatory arbitrage, where companies seek the least restrictive environments, and ensure that AI systems adhere to ethical standards no matter where they are developed or used. International organizations, such as the United Nations, and the Organization for Economic Co-operation and Development (OECD), are already taking steps to foster global dialogue on AI regulation. Future policy directions should build on these efforts, encouraging the creation of international treaties or agreements that promote ethical AI while balancing innovation with accountability. AI regulation is a rapidly evolving field that requires adaptable, forward-looking policies. A risk-based regulatory approach, ethical guidelines, and international cooperation are key to ensuring that AI technologies contribute positively to society while mitigating their potential risks. Future policymakers must work collaboratively with stakeholders across sectors and borders to create a regulatory environment that fosters innovation, protects human rights, and ensures that AI serves the common good.
3.3.2 International Coordination

Value Misalignment
99.9 Other · Value Misalignment > Mitigating social bias
1 AI System · Value Misalignment > Privacy protection
1 AI System · Value Misalignment > Methods for mitigating toxicity
1 AI System · Value Misalignment > Methods for mitigating LLM amorality
1 AI System · Robustness to attack
1 AI System

Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and adhered to by a list of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Other (outside lifecycle): Outside the standard AI system lifecycle
Governance Actor: Regulator, standards body, or oversight entity shaping AI policy
Govern: Policies, processes, and accountability structures for AI risk management
Primary: 6.5 Governance failure