Cannot be confidently classified due to insufficient information, excessive vagueness, or ambiguity.
Reasoning
The mitigation lacks a concrete description, so neither the focal activity nor the implementation mechanism can be identified.
Exploring safe architectures
Beyond the development of safety mechanisms and modules, rethinking the foundational architecture of LLMs is crucial for long-term safety improvements.
Category: 1.1.4 Model Architecture

Safety control modules
Introducing a dedicated safety control module is a promising avenue for improving the safety of LLMs.
Category: 1.2.1 Guardrails & Filtering
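
As a loose illustration only, the sketch below wraps a placeholder text generator in a separate safety-control layer that screens both the incoming prompt and the outgoing response. Every name in it (SafetyControlModule, BLOCKED_PATTERNS, the toy deny-list) is a hypothetical assumption made for this sketch, not a mechanism described in the survey.

```python
# Minimal sketch of a dedicated safety control module wrapped around an LLM.
# All names here (SafetyControlModule, BLOCKED_PATTERNS) are hypothetical
# illustrations, not an API from the surveyed literature.
import re
from typing import Callable

# Toy deny-list; a real module would use learned classifiers, not regexes.
BLOCKED_PATTERNS = [
    r"(?i)how to build a weapon",
    r"(?i)credit card number",
]

class SafetyControlModule:
    """Screens inputs before generation and outputs after generation."""

    def __init__(self, llm: Callable[[str], str]):
        self.llm = llm

    def _is_unsafe(self, text: str) -> bool:
        # Flag text that matches any deny-list pattern.
        return any(re.search(p, text) for p in BLOCKED_PATTERNS)

    def generate(self, prompt: str) -> str:
        if self._is_unsafe(prompt):        # pre-generation check
            return "Request declined by safety module."
        response = self.llm(prompt)
        if self._is_unsafe(response):      # post-generation check
            return "Response withheld by safety module."
        return response

# Usage with a stand-in model in place of a real LLM call:
echo_model = lambda p: f"Echo: {p}"
safe_llm = SafetyControlModule(echo_model)
print(safe_llm.generate("What is the capital of France?"))
```
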
Toward Effective and Unified Safety Mechanisms
Current approaches to enhancing the safety of LLMs exhibit notable limitations and inefficiencies, underscoring the urgent need for more robust and effective solutions.
Category: 1.1.3 Capability Modification

Improving Safety Evaluations for LLMs
First, most existing evaluation metrics are tailored to specific benchmarks or tasks, providing a fragmented and limited view of LLMs that covers either capability or safety but rarely both. This specificity highlights the pressing need for a unified evaluation metric and framework capable of comprehensively assessing LLMs across a wide range of scenarios, ensuring that models are well equipped to meet the demands of diverse tasks and contexts. Such a framework must account for differences in architectures, training data, and intended use cases among LLMs, balancing consistency in evaluation with the flexibility to accommodate different model designs.
Category: 3.2.1 Benchmarks & Evaluation
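
To make the idea of a unified metric concrete, here is a minimal sketch of an evaluator that scores capability and safety on the same 0-to-1 scale so the two can be reported side by side. All names (unified_eval, EvalResult, the substring-match scorers, the refusal_marker default) are hypothetical illustrations; a real framework would need far richer task formats and judging methods.

```python
# Illustrative sketch of a unified evaluation interface that reports
# capability and safety on a shared 0-to-1 scale. Every name and scorer
# below is hypothetical; no existing benchmark or framework is implied.
from dataclasses import dataclass
from statistics import mean
from typing import Callable

@dataclass
class EvalResult:
    capability: float  # 0 = fails every task, 1 = solves every task
    safety: float      # 0 = always unsafe, 1 = refuses every unsafe probe

def unified_eval(
    model: Callable[[str], str],
    capability_tasks: list[tuple[str, str]],  # (prompt, expected substring)
    safety_probes: list[str],                 # prompts the model should refuse
    refusal_marker: str = "I can't help with that",
) -> EvalResult:
    # Capability: fraction of tasks whose expected answer appears in the output.
    cap_scores = [
        1.0 if expected in model(prompt) else 0.0
        for prompt, expected in capability_tasks
    ]
    # Safety: fraction of unsafe probes that the model refuses.
    safe_scores = [
        1.0 if refusal_marker in model(probe) else 0.0
        for probe in safety_probes
    ]
    # Assumes both input lists are non-empty.
    return EvalResult(capability=mean(cap_scores), safety=mean(safe_scores))
```
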
Toward Multivalent International Cooperation and Interdisciplinary Community Building
AI safety governance must evolve to address the growing complexity and global integration of AI technologies. Future directions emphasize the need for multilateral regulatory frameworks that harmonize standards across jurisdictions, ensuring interoperability and joint enforcement mechanisms. Such frameworks should account for diverse ethical and cultural values, integrating cross-sector collaborations that bring together technical, legal, and ethical expertise. Interdisciplinary approaches are essential, particularly in developing governance tools that incorporate both technical and ethical metrics, and conducting human-AI interaction studies to ensure AI systems function equitably across different socio-economic contexts. Moreover, AI governance must focus on multivalent value systems, where ethical imperatives—such as inclusivity and sustainability—shape regulatory practices.
Category: 3.3.2 International Coordination

Value Misalignment: 99.9 Other
Value Misalignment > Mitigating social bias: 1 AI System
Value Misalignment > Privacy protection: 1 AI System
Value Misalignment > Methods for mitigating toxicity: 1 AI System
Value Misalignment > Methods for mitigating LLM amorality: 1 AI System
Robustness to attack: 1 AI System

Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps for LLM safety proposed and followed by a list of AI companies and institutes, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Lifecycle stage: Other (outside the standard AI system lifecycle)
Actor type: Unable to classify (could not be classified to a specific actor type)
AIRM function: Unable to classify (could not be classified to a specific AIRM function)