Design-time architectural choices affecting safety, interpretability, and modularity.
Model architecture design choices can enhance interpretability, supporting safety analysis and risk mitigation.
Interpretability for LLM abilities
A deep understanding of model abilities helps us comprehend how LLMs learn, think, and make decisions, and thereby identify potential safety risks.
Interpretability for alignment
Interpretability, which aims to understand the internal mechanisms of LLMs, provides an alternative solution to these problems (Wu et al., 2024b). It can be used as a tool to identify safety-related features (e.g., privacy, bias), which can then be exploited to steer LLMs towards desired behaviors (e.g., privacy-preserving or unbiased text generation).
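As a concrete illustration of this feature-steering idea, the sketch below derives a crude feature direction from contrastive prompts and adds it to the residual stream during generation (activation steering). This is a minimal sketch, not a method from the survey: GPT-2 as the model, layer 6 as the intervention site, the politeness/hostility prompts, and the steering strength of 4.0 are all hypothetical choices for illustration.

```python
# Minimal activation-steering sketch, assuming GPT-2 as a stand-in model.
# The layer index, contrastive prompts, and steering strength are
# illustrative assumptions, not details taken from the survey.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
LAYER = 6  # hypothetical residual-stream layer to read from and steer

def resid_at(prompt: str) -> torch.Tensor:
    """Mean residual-stream activation after block LAYER for a prompt."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so block LAYER is index LAYER + 1
    return out.hidden_states[LAYER + 1][0].mean(dim=0)

# Contrastive prompts approximate a safety-related feature direction;
# this toy politeness/hostility axis stands in for e.g. a bias feature.
direction = resid_at("You are a kind, respectful assistant.") \
          - resid_at("You are a rude, hostile assistant.")
direction = direction / direction.norm()

def steer_hook(module, inputs, output):
    # Add the feature direction to every token's residual stream.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + 4.0 * direction  # 4.0 is a hypothetical steering strength
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(steer_hook)
ids = tok("The customer asked a question, and the agent replied:", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20, do_sample=False)[0]))
handle.remove()  # restore the unsteered model
```

In practice, safety-relevant directions are more often identified with probing classifiers or sparse autoencoders than with raw prompt differences, but the intervention pattern, adding or removing a feature direction in the residual stream, is the same.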
Broader and Deeper Coverage of Capable Models and Behaviors
Currently, many interpretability studies are based primarily on toy or theoretical models, and the applicability and extensibility of findings from these models to others have not yet been fully verified. To make substantive progress in production environments and industrial applications, the research focus must shift to more complex models and real-world application scenarios. In-depth study of how capable models behave in real environments will help develop more practical interpretability methods, enhancing the reliability and transparency of models in actual applications.
Towards Universality
Current interpretability research mostly focuses on models sharing the same architecture, which restricts the generalizability of interpretability methods across different models and architectures. Identifying universal reasoning patterns within models and developing a unified theoretical framework are crucial for enhancing the generalizability of interpretability research. Establishing common interpretability methods applicable to diverse tasks and model structures would advance the field and broaden its applications across domains.
Value Misalignment
Value Misalignment > Mitigating social bias
Value Misalignment > Privacy protection
Value Misalignment > Methods for mitigating toxicity
Value Misalignment > Methods for mitigating LLM amorality
Robustness to attack
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and followed by AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Verify and Validate: Testing, evaluating, auditing, and red-teaming the AI system
Developer: Entity that creates, trains, or modifies the AI system
Measure: Quantifying, testing, and monitoring identified AI risks