Training methods that shape model behavior through objectives, feedback, and optimization targets.
Reinforcement learning from human feedback (RLHF) has proven a useful avenue for enhancing the safety of LLMs at the post-training stage (Ouyang et al., 2022; OpenAI, 2023a; Touvron et al., 2023; Bai et al., 2022a). RLHF performs human feedback-based fine-tuning, which uses human preferences as a proxy to specify human values (Shen et al., 2023). RLHF typically consists of three core steps: (1) collecting human feedback data, (2) training reward models on the collected feedback, and (3) fine-tuning LLMs with reinforcement learning algorithms such as Proximal Policy Optimization (PPO) (Schulman et al., 2017).
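The reward-modeling step (2) is typically trained with a Bradley-Terry-style loss over preference pairs; a minimal sketch (the function name and toy reward values below are illustrative, not from the survey):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry preference loss for one human-labeled pair:
    -log sigmoid(r_chosen - r_rejected). The loss shrinks as the reward
    model ranks the human-preferred response higher than the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A barely separated pair incurs a larger loss than a well-separated one.
loose = preference_loss(0.1, 0.0)
confident = preference_loss(3.0, 0.0)
```

In practice the two rewards come from a learned model scoring full prompt-response pairs, and step (3) then optimizes the policy against this reward with PPO, usually under a KL penalty to the pre-trained reference model.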
Anthropic further proposes Constitutional AI (Bai et al., 2022b), in which a set of moral and behavioral principles, termed a constitution, is developed for aligning LLMs via supervised learning and reinforcement learning. In Claude 3 (Anthropic, 2024a), the rules of the constitution are derived from the Universal Declaration of Human Rights, Apple's Terms of Service, Principles Encouraging Consideration of Non-Western Perspectives, DeepMind's Sparrow Rules, and Anthropic Customization Principles (Anthropic, 2023a). Alignment approaches similar to Constitutional AI are also used in Gemini (Anil et al., 2023) and Qwen2 (Yang et al., 2024a). Building on Constitutional AI, a research team from Google compares reinforcement learning from AI feedback (RLAIF) (Lee et al., 2024b) with RLHF and demonstrates that RLAIF can be a competitive alternative, reducing reliance on expensive human annotation.

Beyond annotation cost, traditional RLHF methods optimize a reward function fit to human preferences, which is effective but introduces challenges such as increased computational complexity and a bias-variance trade-off in reward estimation and optimization (Schulman et al., 2016). To mitigate these issues, DPO (Rafailov et al., 2023) has been proposed to simplify the alignment process, reducing computational overhead and enabling more robust optimization by using preference data in a more direct way.

In addition to alignment training, interpretability methods can also be used to achieve safety controls over LLMs. The Center for AI Safety (CAIS) introduces the concept of representation engineering (Zou et al., 2023a), which leverages ideas from cognitive neuroscience to improve the transparency of AI systems.
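DPO's "more direct" use of preference data can be made concrete: its per-pair loss compares the policy's chosen-vs-rejected log-probability margin against that of a frozen reference model, with no separate reward model or RL loop. A hedged sketch of the loss from Rafailov et al. (2023); variable names are illustrative:

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair:
    -log sigmoid(beta * [(logp_w - ref_logp_w) - (logp_l - ref_logp_l)]),
    where logp_* are sequence log-probabilities under the trained policy
    and ref_logp_* are the same quantities under the frozen reference."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# When the policy equals the reference, the margin is zero and the loss
# sits at log(2); ranking the chosen response higher reduces it.
at_init = dpo_loss(-1.0, -2.0, -1.0, -2.0)
improved = dpo_loss(-0.5, -2.5, -1.0, -2.0)
```

The scalar beta plays the role of the KL-penalty strength in RLHF, controlling how far the policy may drift from the reference while fitting the preferences.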
Representation engineering extracts various safety-related concepts through representation learning methods, and then modifies or controls these conceptual representations through representation control methods to reduce risks associated with LLMs. The authors conduct extensive case studies using representation engineering on safety aspects such as honesty and ethics, demonstrating its potential to improve transparency, safety, and trust in AI systems.
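One common concrete instance of this extract-then-control recipe is a difference-of-means "steering vector": average a model's hidden activations on concept-positive versus concept-negative prompts, take the normalized difference as the concept direction, and shift hidden states along it at inference time. A minimal numpy sketch with synthetic activations (all names and data here are illustrative, not taken from Zou et al.):

```python
import numpy as np

def steering_vector(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Unit-norm difference-of-means direction separating activations
    collected on concept-positive prompts (e.g. honest completions)
    from concept-negative ones (e.g. dishonest completions)."""
    v = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def steer(hidden: np.ndarray, direction: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Representation control: shift a hidden state along the concept
    direction to amplify (alpha > 0) or suppress (alpha < 0) the concept."""
    return hidden + alpha * direction

# Synthetic check: two activation sets that differ only along the first axis.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 4))
direction = steering_vector(base + np.array([1.0, 0.0, 0.0, 0.0]), base)
shifted = steer(np.zeros(4), direction)
```

In a real system the activations would be read from (and written back to) a chosen transformer layer via forward hooks; this sketch only shows the linear-algebra core of the control step.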
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and followed by a number of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.