Foundational safety research, theoretical understanding, and scientific inquiry informing AI development.
Recent studies argue that LLMs exhibit emergent abilities: capabilities absent in smaller models but present at larger scales (Schaeffer et al., 2023). By observing models' training dynamics, existing research (Power et al., 2022; Doshi et al., 2024) has identified two important phenomena related to generalization and emergence: grokking and memorization.
Grokking is a phenomenon observed in over-parameterized neural networks in which a model that has severely overfitted its training data suddenly and significantly improves in validation accuracy. Grokking is closely tied to data, representations, and regularization: larger datasets reduce the number of steps before grokking occurs (Zhu et al., 2024), and well-structured embeddings and regularization measures accelerate its onset, with weight decay proving particularly effective at strengthening generalization (Liu et al., 2022). Recent studies have also shown that models acquire more capabilities as they scale, such as more precise spatial and temporal representations (Schaeffer et al., 2023; Gurnee & Tegmark, 2024).

Memorization, by contrast, refers to models predicting from statistical features rather than causal relations. Nanda et al. (2023) hypothesize that memorization constitutes one phase of grokking, dividing training into three consecutive phases: memorization, circuit formation, and cleanup. Their experimental results show that grokking, rather than being a sudden shift, arises from the gradual amplification of structured mechanisms encoded in the weights, followed by the later removal of memorizing components.
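The modular-arithmetic setting in which grokking was first reported (Power et al., 2022) can be sketched as follows. This is a minimal illustration, not the configuration from the cited papers: the two-layer ReLU network, full-batch gradient descent, and the specific modulus, learning rate, and weight-decay value are all illustrative assumptions, and observing the full grokking transition typically requires far longer training than the few steps run here.

```python
# Minimal sketch of the modular-addition grokking setup:
# a small network trained on (a + b) mod P with weight decay.
# Architecture and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
P = 23                                     # modulus; task is (a + b) mod P
pairs = np.array([(a, b) for a in range(P) for b in range(P)])
labels = (pairs[:, 0] + pairs[:, 1]) % P

# One-hot encode the two operands and concatenate them.
X = np.zeros((len(pairs), 2 * P))
X[np.arange(len(pairs)), pairs[:, 0]] = 1.0
X[np.arange(len(pairs)), P + pairs[:, 1]] = 1.0

# Random train/validation split; larger training fractions
# shorten the delay before grokking (Zhu et al., 2024).
idx = rng.permutation(len(pairs))
train, val = idx[: len(idx) // 2], idx[len(idx) // 2:]

H = 64                                     # hidden width
W1 = rng.normal(0, 0.1, (2 * P, H))
W2 = rng.normal(0, 0.1, (H, P))

def forward(Xs):
    h = np.maximum(Xs @ W1, 0.0)           # ReLU hidden layer
    return h, h @ W2                       # hidden activations, logits

def accuracy(split):
    _, logits = forward(X[split])
    return (logits.argmax(1) == labels[split]).mean()

def mean_xent(split):
    _, logits = forward(X[split])
    z = logits - logits.max(1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(1, keepdims=True))
    return -logp[np.arange(len(split)), labels[split]].mean()

lr, wd = 0.2, 1e-3                         # weight decay drives generalization
loss_start = mean_xent(train)
for step in range(1000):
    h, logits = forward(X[train])
    # Softmax cross-entropy gradient w.r.t. logits.
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    p[np.arange(len(train)), labels[train]] -= 1.0
    p /= len(train)
    gW2 = h.T @ p + wd * W2                # L2 penalty as weight decay
    gh = p @ W2.T
    gh[h <= 0] = 0.0                       # ReLU gradient mask
    gW1 = X[train].T @ gh + wd * W1
    W1 -= lr * gW1
    W2 -= lr * gW2
loss_end = mean_xent(train)

print("train acc:", accuracy(train), "val acc:", accuracy(val))
```

In the grokking regime, training accuracy saturates long before validation accuracy moves; tracking both curves over many more steps than shown here is what reveals the delayed generalization.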
Reasoning
Foundational research investigating how LLM abilities generalize and emerge, informing our understanding of model development.
Value Misalignment
Value Misalignment > Mitigating social bias
Value Misalignment > Privacy protection
Value Misalignment > Methods for mitigating toxicity
Value Misalignment > Methods for mitigating LLM amorality
Robustness to attack

Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to a comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and followed by AI companies and institutes for LLM safety, and AI governance aimed at LLM safety, with discussions of international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity of a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Measure
Quantifying, testing, and monitoring identified AI risks