Changes to the model's learned parameters, architecture, or training process, including modifications to training data that affect what the model learns.
Internal protection involves modifying an LLM during the process of training or tuning to improve its ability to defend against various attacks, reduce risks, and ensure safe behaviors in real-world applications.
Fu et al. (2024) propose single- and mixed-task losses for instruction tuning and demonstrate that appropriate instruction tuning significantly improves LLMs' safe handling of risky content, thereby defending against attacks involving malicious long documents. Han et al. (2023) inject three types of defense functions into different stages of federated-learning aggregation for federated LLMs to support defense against adversarial attacks. Hasan et al. (2024) demonstrate that moderate WANDA pruning can bolster an LLM's defense against jailbreak attacks while obviating the need for fine-tuning. WANDA pruning (Sun et al., 2024c) removes a subset of network weights with the goal of preserving performance. In addition, a considerable number of studies use fine-tuning (Touvron et al., 2023) or instruction tuning (Deng et al., 2023b) to strengthen LLMs' defense against prompt attacks. Liu et al. (2024c) introduce a two-stage adversarial tuning framework that enhances LLMs' resistance to unknown jailbreak attacks through iterative refinement of adversarial prompts. Touvron et al. (2023) collect adversarial prompts together with their safety demonstrations, subsequently integrating these samples into the general supervised fine-tuning pipeline. Correspondingly, Yi et al. (2023) apply adversarial training to the self-supervised fine-tuning stage of LLMs, teaching them to ignore instructions embedded in external content and thereby enhancing their robustness to indirect prompt injection attacks. Despite their effectiveness, internal protection methods can increase the complexity of LLMs and reduce their interpretability and maintainability. It is also worth noting that current defense strategies focus heavily on blocking attacks while ignoring the resulting loss of helpfulness. Varshney et al. (2024) argue that an ideal defense strategy should make LLMs safe against "unsafe prompts" rather than over-defensive on "safe prompts".
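The WANDA metric mentioned above scores each weight by the product of its magnitude and the L2 norm of its corresponding input activation, then zeroes the lowest-scoring weights. The following is a minimal NumPy sketch under that reading; the function name, the per-row unstructured sparsity scheme, and the calibration-activation interface are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def wanda_prune(W, X, sparsity=0.5):
    """Sketch of WANDA-style pruning: score = |weight| * L2 norm of the
    weight's input activation; zero the lowest-scoring fraction of weights
    in each output row. No fine-tuning is performed afterwards."""
    # W: (out_features, in_features) weight matrix
    # X: (n_samples, in_features) calibration activations
    act_norm = np.linalg.norm(X, axis=0)       # per-input-feature L2 norm
    scores = np.abs(W) * act_norm              # WANDA importance metric
    k = int(W.shape[1] * sparsity)             # weights to remove per row
    prune_idx = np.argsort(scores, axis=1)[:, :k]  # k lowest scores per row
    W_pruned = W.copy()
    np.put_along_axis(W_pruned, prune_idx, 0.0, axis=1)
    return W_pruned
```

Scoring by activation norm rather than weight magnitude alone is what lets the method account for how strongly each input feature is actually used on real data.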
They also propose an evaluation benchmark termed Safety and OverDefensiveness Evaluation (SODE), and their experimental results yield important findings. For example, self-checking does improve safety with respect to inputs, but at the cost of severe over-defensiveness.
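The self-checking defense evaluated in this line of work amounts to asking the model to judge its own input before answering. A minimal sketch of that control flow, where the `llm` callable and the refusal message are hypothetical stand-ins rather than any benchmarked system:

```python
def self_check(llm, user_prompt):
    """Sketch of a self-checking defense: ask the model whether the prompt
    is safe before answering it. Findings such as Varshney et al. (2024)
    suggest this improves safety but over-refuses benign prompts."""
    verdict = llm(
        "Answer strictly 'safe' or 'unsafe'. Is it safe to respond to the "
        f"following request?\n\n{user_prompt}"
    )
    if "unsafe" in verdict.lower():
        return "I can't help with that request."
    return llm(user_prompt)   # only answer prompts judged safe
```

The over-defensiveness failure mode arises in the first call: a cautious model labels many harmless prompts "unsafe", so the wrapper refuses them even though answering would have been fine.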
Reasoning
Placeholders in the mitigation name and definition/evidence fields prevent identifying the focal activity or implementation location.
Attack defense
Value Misalignment
Value Misalignment > Mitigating social bias
Value Misalignment > Privacy protection
Value Misalignment > Methods for mitigating toxicity
Value Misalignment > Methods for mitigating LLM amorality
Robustness to attack
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and abided by a list of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks
Other