Techniques to remove, bound, or modify learned model capabilities post-training.
Social biases present in the training data of LLMs have raised concerns that deploying LLMs in real-world scenarios may exacerbate societal biases. Considerable effort has been devoted to detecting and eliminating social biases in LLMs (Sanh et al., 2020; Joniak & Aizawa, 2022a; Fleisig et al., 2023; Rakshit et al., 2024). Common debiasing methods based on retraining or fine-tuning LLMs with anti-bias datasets have limitations such as poor generalization, high cost, and catastrophic forgetting (Zhao et al., 2024a). Interpretability techniques offer a distinctive perspective on mitigating these biases by revealing the mechanisms through which biases are embedded within models. For instance, Ma et al. (2023) effectively debiased LLMs by probing attention heads and evaluating their attributions to detect biased encodings, then pruning those encodings. Inspired by induction heads, Yang et al. (2023d) measured the bias scores of attention heads that focus on specific stereotypes in pre-trained LLMs, identifying biased heads by comparing changes in attention scores between biased and regular heads. By masking the identified biased heads, they effectively reduced the gender bias encoded in LLMs. Liu et al. (2024h) explored an interpretability method for mitigating social biases in LLMs by introducing the concept of social bias neurons. They first proposed an integrated gap gradient, akin to gradient-based attribution methods, which precisely locates social bias neurons by backpropagating and integrating the gradients of the logits gap; they then mitigated social bias by suppressing the activation of the located neurons. Extensive experiments validate the effectiveness of their method and demonstrate the potential of interpretability methods for eliminating biases in LLMs.
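The head-level approach described above can be sketched in a few lines: score each attention head on probe examples, flag heads whose attention skews toward stereotyped tokens, and zero out the flagged heads' outputs at inference time. The following is a minimal illustration in plain Python; the function names, threshold, and toy numbers are assumptions for exposition, not code from the cited papers.

```python
def head_bias_score(attn_to_stereotype, attn_to_neutral):
    """Bias score for one head: how much more it attends to
    stereotyped tokens than to neutral ones, averaged over probes."""
    diffs = [s - n for s, n in zip(attn_to_stereotype, attn_to_neutral)]
    return sum(diffs) / len(diffs)

def select_biased_heads(per_head_scores, threshold=0.1):
    """Heads whose bias score exceeds the threshold are flagged."""
    return {h for h, score in per_head_scores.items() if score > threshold}

def masked_head_outputs(head_outputs, biased_heads):
    """Zero out flagged heads' contributions before the output
    projection -- equivalent to pruning them at inference."""
    return {h: ([0.0] * len(v) if h in biased_heads else v)
            for h, v in head_outputs.items()}

# Toy probe data: head 0 strongly prefers stereotyped tokens; head 1 does not.
scores = {
    0: head_bias_score([0.8, 0.7, 0.9], [0.2, 0.3, 0.1]),
    1: head_bias_score([0.4, 0.5, 0.45], [0.45, 0.5, 0.4]),
}
biased = select_biased_heads(scores, threshold=0.1)   # -> {0}
outputs = masked_head_outputs({0: [1.0, 2.0], 1: [3.0, 4.0]}, biased)
```

In a real transformer the same idea is applied by masking or pruning heads inside the attention module; the sketch only shows the scoring and masking logic.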
Value Misalignment
Value Misalignment > Mitigating social bias (99.9)
Value Misalignment > Privacy protection (1)
Value Misalignment > Methods for mitigating toxicity (1)
Value Misalignment > Methods for mitigating LLM amorality (1)
Robustness to attack (1)
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and followed by AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks