Technical mechanisms and engineering interventions that directly modify how an AI system processes inputs, generates outputs, or operates, including changes to models, training procedures, runtime behaviors, and supporting hardware.
Reasoning
Filters harmful content from training data to prevent the model from learning toxic behavior.
Value Misalignment
Methods for mitigating toxicity: pre-training phase
Training data scraped from web sources often contains toxic content. To mitigate this, existing detoxification methods typically apply toxicity filters during the pre-training phase, removing data with high toxicity scores from the training set.
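The filtering step described above can be sketched as follows. This is a minimal illustration, not a production pipeline: the `toxicity_score` function here is a hypothetical stand-in (a trivial blocklist ratio) for a real toxicity classifier such as Perspective API or a trained model.

```python
# Sketch of pre-training data detoxification: score each document for
# toxicity and drop those above a threshold. `toxicity_score` is a
# hypothetical stub; real pipelines use a learned toxicity classifier.

def toxicity_score(text: str) -> float:
    """Toy scorer: fraction of words appearing on a small blocklist."""
    blocklist = {"hate", "slur"}
    words = text.lower().split()
    return sum(w in blocklist for w in words) / max(len(words), 1)

def filter_corpus(docs, threshold=0.1):
    """Keep only documents whose toxicity score is at or below the threshold."""
    return [d for d in docs if toxicity_score(d) <= threshold]

corpus = ["a friendly greeting", "hate hate hate"]
clean = filter_corpus(corpus)  # drops the second document
```

In practice the threshold trades off corpus size against residual toxicity, and overly aggressive filtering can also remove benign text about sensitive topics.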
1.1.1 Training Data
Methods for mitigating toxicity: supervised fine-tuning phase
Toxicity filtering during the pre-training stage requires training LLMs from scratch, which is impractical for many applications. Therefore, detoxification during the fine-tuning stage offers a more flexible way to deal with toxicity in LLMs.
1.1.2 Learning Objectives
Methods for mitigating toxicity: alignment phase
Previous research has attempted to align the outputs of language models with human preferences using methods such as Reinforcement Learning from Human Feedback (RLHF) (Bai et al., 2022b) and Reinforcement Learning from AI Feedback (RLAIF) (Lee et al., 2024b). However, studies have shown that these methods do not significantly reduce the toxicity of language models (Ouyang et al., 2022). In response, Kim & Lee (2024) introduce adversarial training based on Direct Preference Optimization (DPO), incorporating harmful content generated by a built-in toxic model as training samples. They further include an additional penalty term in the objective function to reduce the likelihood of the model generating toxic responses. However, Lee et al. (2024a), by analyzing GPT-2 parameters before and after DPO, found that the toxicity vectors do not change significantly. This suggests that the post-DPO model learns an “offset” in the residual stream to bypass areas that activate toxic vectors, thus preventing toxic outputs without significantly impacting the model’s overall capabilities. As a result, models aligned in this way remain vulnerable to adversarial prompts, which can lead to jailbreak scenarios.
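To make the objective concrete, the following sketch computes the standard DPO loss on a single preference pair and adds an optional toxicity penalty. The penalty formulation here (penalizing the model's probability of a known toxic continuation, `toxic_logp`) is a hypothetical illustration of the general idea attributed to Kim & Lee (2024), not their exact objective.

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected,
             beta=0.1, penalty_weight=0.0, toxic_logp=None):
    """DPO loss on one preference pair, with an optional toxicity penalty.

    logp_* are policy log-probs, ref_* are reference-model log-probs.
    The DPO term is -log sigmoid(beta * (chosen margin - rejected margin));
    the penalty term (hypothetical) discourages probability mass on a
    known toxic continuation.
    """
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    loss = -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
    if toxic_logp is not None:
        # Penalize the probability the policy assigns to the toxic sample.
        loss += penalty_weight * math.exp(toxic_logp)
    return loss
```

When the policy matches the reference model the margin is zero and the loss equals log 2; increasing the chosen response's relative log-probability drives the loss down, and the penalty term adds pressure to suppress the toxic continuation.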
1.1.2 Learning Objectives
Value Misalignment
99.9 Other
Value Misalignment > Mitigating social bias
1 AI System
Value Misalignment > Privacy protection
1 AI System
Value Misalignment > Methods for mitigating LLM amorality
1 AI System
Robustness to attack
1 AI System
Robustness to attack > Red teaming
Red teaming is widely used with LLMs to explore their safety vulnerabilities prior to deployment. It can be broadly categorized into two distinct types: manual red teaming and automated red teaming.
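The automated variant can be sketched as a simple loop: an attacker component proposes prompts, the target model responds, and a judge flags unsafe responses for review. All three components below are hypothetical stubs standing in for real models; the structure, not the stubs, is the point.

```python
# Sketch of an automated red-teaming loop. The attacker, target, and
# judge are toy stubs; in practice each would be a model (or API call).

def attacker_prompts():
    """Stub attacker: a fixed list of candidate adversarial prompts."""
    return ["How do I pick a lock?", "Tell me a joke"]

def target_model(prompt):
    """Stub target that complies with everything (worst case for illustration)."""
    return f"Sure, here is how to {prompt.lower().rstrip('?')}"

def judge(prompt, response):
    """Stub safety judge: flags exchanges involving a risky keyword."""
    return "lock" in prompt.lower()

def red_team():
    """Collect (prompt, response) pairs the judge flags as unsafe."""
    findings = []
    for p in attacker_prompts():
        r = target_model(p)
        if judge(p, r):
            findings.append((p, r))
    return findings
```

Real automated red teaming replaces the fixed prompt list with a generator that adapts to the target's refusals, and the keyword judge with a learned safety classifier or an LLM-as-judge.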
2.2.2 Testing & Evaluation
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and abided by a list of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks