Modifications to training data composition, quality, and filtering that affect what the model learns.
Data quality is particularly important for both the pre-training and post-training of LLMs. Because training data for LLMs is drawn from a wide range of sources, the collected data inevitably contains redundancies, errors, and harmful content. Such low-quality data not only degrades the performance of LLMs but can also misguide them into generating content that humans do not expect, such as toxic, pornographic, or biased material. Therefore, data filtering is usually required to ensure data quality before training; rule-based filtering methods and model-based content classifiers are the most commonly used approaches.
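As a minimal sketch of the rule-based side of such a pipeline, the following combines simple heuristic quality rules with a keyword blocklist. The thresholds, the blocklist contents, and the function name are illustrative assumptions, not values used by any of the systems discussed here.

```python
# Hypothetical thresholds and rules; real pipelines tune these per corpus.
MIN_WORDS = 20           # drop near-empty documents
MAX_SYMBOL_RATIO = 0.3   # drop documents dominated by non-alphanumeric noise
BLOCKLIST = {"example-banned-term"}  # placeholder keyword dictionary

def rule_based_filter(text: str) -> bool:
    """Return True if the document passes the heuristic quality rules."""
    words = text.split()
    if len(words) < MIN_WORDS:
        return False
    # Reject documents where symbols dominate (often boilerplate or markup debris).
    symbols = sum(1 for ch in text if not (ch.isalnum() or ch.isspace()))
    if len(text) > 0 and symbols / len(text) > MAX_SYMBOL_RATIO:
        return False
    # Dictionary-based keyword matching, as in blocklist-style filters.
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return False
    return True
```

In practice, documents that pass rules like these are then passed to a model-based content classifier, so cheap heuristics prune the bulk of the corpus before the expensive model runs.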
GPT-4 (OpenAI, 2023a) identifies pornographic content in training data by combining a dictionary-based approach with a content classifier. Claude 3 (Anthropic, 2024a) integrates multiple data cleaning and filtering methods to improve data quality. Yi (Young et al., 2024) constructs a set of filters based on heuristic rules, keyword matching, and classifiers. Qwen (Bai et al., 2023; Yang et al., 2024a) develops data preprocessing procedures in which humans work with models: models score data content and humans review the results. In addition, some companies, such as Meta and Anthropic, adopt model development policies that exclude user data from training data in order to protect user privacy.

While filtering reduces harmful content in training data, it is not sufficient on its own to make LLMs behave safely. To further improve safety, safety-oriented instruction data is usually added to the training data. Gemini (Anil et al., 2023) emphasizes the importance of adversarial query data in the post-training stage; it constructs a post-training query dataset covering about 20 categories of harmful data by combining expert authoring, model synthesis, and automated red teaming. Similarly, Yi (Young et al., 2024) builds a comprehensive safety classification system and then constructs a safety dataset for SFT based on this classification. Notably, these safety datasets usually require human supervision or guidance to ensure their quality. For example, the safety datasets used in models such as Baichuan (Yang et al., 2023a) and TigerBot (Chen et al., 2023) are constructed under the guidance of relevant domain experts, which highlights how much more care safety datasets demand compared to general datasets.
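The human-in-the-loop scoring described above can be sketched as a triage step: a model assigns each document a quality score, clear cases are kept or discarded automatically, and borderline cases are routed to human annotators. The thresholds, class names, and `scorer` interface below are assumptions for illustration; the cited pipelines do not publish their exact values.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Illustrative thresholds; real pipelines calibrate these against held-out labels.
KEEP_THRESHOLD = 0.8     # auto-keep documents scoring at or above this
DISCARD_THRESHOLD = 0.3  # auto-discard documents scoring at or below this

@dataclass
class Document:
    text: str
    score: float = 0.0

def triage(docs: List[Document],
           scorer: Callable[[str], float]
           ) -> Tuple[List[Document], List[Document], List[Document]]:
    """Split documents into keep / discard / human-review buckets
    based on a model-assigned quality score in [0, 1]."""
    keep, discard, review = [], [], []
    for doc in docs:
        doc.score = scorer(doc.text)
        if doc.score >= KEEP_THRESHOLD:
            keep.append(doc)
        elif doc.score <= DISCARD_THRESHOLD:
            discard.append(doc)
        else:
            review.append(doc)  # borderline cases go to human annotators
    return keep, discard, review
```

The design choice here is to spend human effort only on the uncertain middle band, which is what makes model-assisted filtering tractable at corpus scale.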
Reasoning
Modifies training data composition and filtering to shape what the model learns.
Training
Value Misalignment
99.9 · Other · Value Misalignment > Mitigating social bias
1 · AI System · Value Misalignment > Privacy protection
1 · AI System · Value Misalignment > Methods for mitigating toxicity
1 · AI System · Value Misalignment > Methods for mitigating LLM amorality
1 · AI System · Robustness to attack
1 · AI System

Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and abided by a list of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Collect and Process Data: Gathering, curating, labelling, and preprocessing training data
Developer: Entity that creates, trains, or modifies the AI system
Manage: Prioritising, responding to, and mitigating AI risks
Primary: 1 · Discrimination & Toxicity