Training methods that shape model behavior through objectives, feedback, and optimization targets.
Post-training is crucial for reducing bias in LLMs after the pre-training phase. One common approach is supervised fine-tuning, in which a pre-trained model is fine-tuned on more balanced and representative data to reduce bias in specific tasks or domains (Devlin et al., 2019; Huang & Xiong, 2024b). Fine-tuning on curated or debiased datasets can help mitigate biases inherited from the original training data. Adversarial debiasing is another effective technique, in which an adversarial model is trained to detect and neutralize biases in LLM outputs (Zhang et al., 2018), forcing the LLM to produce fairer and more balanced representations.
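The adversarial setup above can be reduced to a toy objective (a minimal sketch with scalar losses, not the full training loop of Zhang et al. (2018); `lam` is an assumed trade-off weight): the main model minimizes its task loss while maximizing the adversary's loss at recovering the protected attribute from its representations.

```python
def debiasing_objective(task_loss: float, adversary_loss: float, lam: float = 1.0) -> float:
    """Combined objective minimized by the main model (toy sketch).

    The adversary is trained separately to minimize adversary_loss
    (predicting a protected attribute, e.g. gender, from the model's
    representations). The main model minimizes task_loss while maximizing
    adversary_loss, hence the subtraction (the gradient-reversal intuition).
    """
    return task_loss - lam * adversary_loss
```

In practice both players are updated alternately or jointly via a gradient-reversal layer; the scalar form here only illustrates the sign structure of the objective.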
Post-training mitigation also includes techniques such as bias fine-pruning, which selectively removes biased neurons or layers from LLMs (Blakeney et al., 2021), and counterfactual data augmentation, in which modified data with altered social attributes (e.g., swapped gender or race identifiers) are used to fine-tune LLMs, helping them learn to treat different social groups more equitably (Lu et al., 2020). Furthermore, knowledge distillation allows the knowledge of a biased model to be transferred to a smaller, more task-specific model, with bias-reduction constraints applied during the distillation process (Lin et al., 2021). Finally, fairness-constrained fine-tuning introduces fairness objectives directly into the loss function of LLMs, ensuring that the optimization process explicitly balances next-token prediction accuracy with fairness considerations (Zafar et al., 2017). These post-training techniques provide flexible and efficient solutions for mitigating bias, enhancing both the fairness and safety of LLMs without the need for complete retraining.
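Counterfactual data augmentation can be illustrated with a small attribute-swapping function (a simplified sketch: `SWAPS` is a hypothetical lexicon, and real pipelines resolve ambiguous forms such as possessive vs. objective "her" far more carefully, often with part-of-speech tagging):

```python
import re

# Hypothetical identifier pairs; a curated lexicon would be used in practice.
# Note: "her" is ambiguous (possessive vs. objective); this sketch ignores that.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "his",
         "his": "her", "man": "woman", "woman": "man"}

PATTERN = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)

def counterfactual(text: str) -> str:
    """Swap gendered identifiers to create a counterfactual training example."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        # Preserve sentence-initial capitalization (simplified handling).
        return swapped.capitalize() if word[0].isupper() else swapped
    return PATTERN.sub(repl, text)
```

Each original example and its counterfactual are then both included in the fine-tuning set, so the model sees the same context paired with different social attributes.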
Reasoning
Post-training techniques remove or modify learned social-bias behaviors in the model.
Mitigating social bias
Value Misalignment
Value Misalignment > Mitigating social bias
Value Misalignment > Privacy protection
Value Misalignment > Methods for mitigating toxicity
Value Misalignment > Methods for mitigating LLM amorality
Robustness to attack
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and followed by a range of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks