Techniques to remove, bound, or modify learned model capabilities post-training.
Interpretability methods can be used to identify and reduce toxicity.
A recent study employed linear probes and analysis of multi-layer perceptron (MLP) blocks to identify and examine specific value vectors in GPT-2 that promote toxic outputs. Based on these findings, the authors proposed two methods to reduce toxicity. First, by intervening in the model's forward pass during generation (specifically, by subtracting the identified toxic vectors), they reduce the model's propensity to produce toxic outputs while maintaining the quality of the generated text. Second, by applying Direct Preference Optimization (DPO) to carefully curated paired datasets, they found that minimal parameter changes were sufficient to bypass the toxic vectors and thereby reduce toxic outputs. Along similar lines, Geva et al. (2022b) propose mitigating toxic generation by identifying and activating neurons within the feed-forward layers that promote innocuous or safe words. Balestriero et al. (2023) analyze and characterize LLMs' internal multi-head attention mechanisms and feed-forward networks from a geometric perspective; they employ a spline formulation (Balestriero et al., 2018) to extract key geometric features from MLPs, which not only reveals the intrinsic structure of the models but also enables the identification and classification of toxic speech without additional training. By feeding prompts with negative and positive prefixes into LLMs, Leong et al. (2023) analyze internal contextualized representations to identify the toxicity direction of each attention head; they then use the original context prompt to guide the update of the current value vectors in the direction opposite to the detected toxicity, thereby reducing toxic generation.
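As a rough illustration of the first intervention (subtracting a toxic direction from internal activations during the forward pass), the sketch below steers one GPT-2 MLP block with a PyTorch forward hook. The layer index, the scaling factor alpha, and the randomly initialized toxic_direction are placeholders; in the cited work the direction is derived from probing MLP value vectors that promote toxic tokens. This is a minimal sketch of activation-level steering under those assumptions, not the exact published procedure.

```python
# Minimal sketch: subtract a precomputed "toxic direction" from the hidden
# states of one GPT-2 MLP block during generation. The direction here is a
# random placeholder; a real one would come from a linear probe or from the
# MLP value vectors identified as promoting toxic tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

hidden_size = model.config.hidden_size
toxic_direction = torch.randn(hidden_size)          # placeholder direction
toxic_direction = toxic_direction / toxic_direction.norm()
alpha = 5.0                                          # intervention strength (hyperparameter)

def subtract_toxic_direction(module, inputs, output):
    # output: MLP hidden states of shape (batch, seq_len, hidden_size);
    # returning a tensor from a forward hook replaces the module's output.
    return output - alpha * toxic_direction.to(output.dtype)

# Hook the MLP of a mid-to-late block (layer index is illustrative; gpt2 has 12 blocks).
handle = model.transformer.h[10].mlp.register_forward_hook(subtract_toxic_direction)

prompt = "You are such a"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(generated[0], skip_special_tokens=True))

handle.remove()  # restore the unmodified forward pass
```

In practice, the strength alpha trades off detoxification against fluency, which echoes the point above that such interventions should preserve the quality of the generated text.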
Reasoning
The mitigation name lacks a definition and supporting evidence; there is insufficient information to identify the focal activity or implementation mechanism.
Value Misalignment
99.9  Other      Value Misalignment > Mitigating social bias
1     AI System  Value Misalignment > Privacy protection
1     AI System  Value Misalignment > Methods for mitigating toxicity
1     AI System  Value Misalignment > Methods for mitigating LLM amorality
1     AI System  Robustness to attack
1     AI System
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and adhered to by a number of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks