Unclassifiable mitigations.
International Cooperation Proposals
Given these challenges, a flexible, multi-tiered framework for international AI regulation is needed at the global level. This framework should operate through a combination of binding international agreements and voluntary, non-binding standards that can adapt to regional and sectoral differences. Drawing from previous research, we identify key components for such a framework:

• Global AI Risk Taxonomy: A global AI risk taxonomy, akin to the one proposed in this survey, could provide a unified language for discussing AI risks across sectors and countries. This taxonomy would categorize risks into different levels, focusing on critical areas such as data privacy, bias, and misuse. By standardizing how risks are understood and communicated, international actors can more easily align on policy priorities (see the sketch after this excerpt).

• International AI Standards: Building on existing initiatives, international standards could be developed under the auspices of a global body such as the United Nations or the World Trade Organization. These standards would focus on ensuring AI systems’ transparency, accountability, and fairness. A global AI council could be established to oversee compliance with these standards, with mechanisms for voluntary reporting and peer review (Zhang et al., 2022).

• Data Governance and Privacy: One of the most critical areas for cooperation is data governance, given the central role of data in AI development. The General Data Protection Regulation (GDPR) in Europe provides a template for robust data protection, but international frameworks are needed to manage cross-border data flows and prevent abuses, particularly in countries with weaker regulatory environments (Barker, 2023). This could involve multilateral agreements on data-sharing protocols and privacy standards, facilitated by international organizations.

• Ethics and Human Rights: AI governance must be grounded in universal ethical principles and human rights. This includes ensuring that AI technologies do not exacerbate existing inequalities or infringe on human rights. The UNESCO Recommendation on the Ethics of AI is a step in this direction, but further cooperation is required to ensure that these principles are embedded in national laws and international agreements (Meltzer & Kerry, 2021).

• Technological Sovereignty and National Security: To address concerns about the geopolitical implications of AI, international treaties should balance technological sovereignty with security cooperation. This would ensure that AI technologies with potential military applications are subject to export controls and international oversight (Zhang et al., 2022).

In conclusion, as AI technology continues to evolve, the need for international cooperation in its regulation becomes more urgent. National regulations, while necessary, are insufficient to manage the global risks posed by AI. An international framework that promotes cooperation, harmonizes standards, and balances innovation with safety is essential. Drawing on existing regulatory efforts and risk taxonomies, we propose a flexible, multi-tiered framework for AI governance, built on the key components identified above, that can adapt to the diverse needs and priorities of different countries while ensuring the responsible development of AI.
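The bullets above describe the proposed risk taxonomy only in prose. As a purely illustrative aid, the Python sketch below shows one way a multi-level taxonomy could be represented so that risk paths can be reported in a shared vocabulary; the class name, fields, severity labels, and example categories are assumptions made for demonstration, not part of the survey's proposal.

```python
# Hypothetical sketch of a multi-level AI risk taxonomy record.
# All names and example entries are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class RiskCategory:
    code: str                      # taxonomy code, e.g. "1.1"
    name: str                      # human-readable category name
    severity: str                  # e.g. "low", "medium", "critical"
    subcategories: list["RiskCategory"] = field(default_factory=list)


# Example fragment covering areas named above (privacy, bias).
taxonomy = RiskCategory(
    code="1", name="Value misalignment", severity="critical",
    subcategories=[
        RiskCategory("1.1", "Data privacy", "critical"),
        RiskCategory("1.2", "Social bias", "medium"),
    ],
)


def flatten(cat: RiskCategory, prefix: str = "") -> list[str]:
    """Return 'code name' paths for a category and its descendants,
    giving a shared vocabulary for cross-border reporting."""
    path = f"{prefix}{cat.code} {cat.name}"
    paths = [path]
    for sub in cat.subcategories:
        paths.extend(flatten(sub, prefix=f"{path} > "))
    return paths


print(flatten(taxonomy))
# ['1 Value misalignment', '1 Value misalignment > 1.1 Data privacy',
#  '1 Value misalignment > 1.2 Social bias']
```

The nested structure mirrors the "Category > Subcategory" paths used elsewhere on this page; how categories and severity levels would actually be defined is left to the international process the excerpt describes.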
3.3 Voluntary & Cooperative
Technical oversight proposals
Technical oversight in AI regulation involves a variety of components that ensure AI systems adhere to ethical and safety standards. These components include transparency and explainability, auditing and monitoring, accountability mechanisms, and the establishment of safety standards and certification processes.
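To make the oversight components above concrete, the following sketch shows how they might be tracked for a single deployed system. The OversightReport fields, the 90-day audit window, and the example values are illustrative assumptions, not requirements drawn from any existing standard.

```python
# Illustrative-only tracking of the oversight components named above:
# transparency, auditing, accountability, and certification.
from dataclasses import dataclass


@dataclass
class OversightReport:
    model_id: str
    has_model_card: bool          # transparency & explainability artifact
    last_audit_days_ago: int      # auditing & monitoring cadence
    accountable_owner: str        # named accountability contact
    certified: bool               # safety certification obtained


def oversight_gaps(report: OversightReport, max_audit_age_days: int = 90) -> list[str]:
    """Return the oversight components that are currently unmet."""
    gaps = []
    if not report.has_model_card:
        gaps.append("transparency: no model card or explainability report published")
    if report.last_audit_days_ago > max_audit_age_days:
        gaps.append("auditing: last audit is outside the review window")
    if not report.accountable_owner:
        gaps.append("accountability: no named responsible owner")
    if not report.certified:
        gaps.append("certification: safety certification not obtained")
    return gaps


print(oversight_gaps(OversightReport("demo-llm", True, 120, "safety-team", False)))
# ['auditing: last audit is outside the review window',
#  'certification: safety certification not obtained']
```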
3.1 Legal & Regulatory
Ethics and Compliance Proposals
The dual imperatives of ethical AI and regulatory compliance require robust frameworks that can adapt to the growing complexities of AI. This survey seeks to analyze current ethics and compliance proposals, with a focus on human-centered, responsible AI (HCR-AI), the need for sustainability, and the implementation of AI governance models.
3.2.2 Technical Standards
Value Misalignment
99.9 Other
Value Misalignment > Mitigating social bias
1 AI System
Value Misalignment > Privacy protection
1 AI System
Value Misalignment > Methods for mitigating toxicity
1 AI System
Value Misalignment > Methods for mitigating LLM amorality
1 AI System
Robustness to attack
1 AI System
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and abided by a list of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academy researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers has been publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Other (outside lifecycle)
Outside the standard AI system lifecycle
Governance Actor
Regulator, standards body, or oversight entity shaping AI policy
Unable to classify
Could not be classified to a specific AIRM function
Primary
6.5 Governance failure