Foundational safety research, theoretical understanding, and scientific inquiry informing AI development.
Many studies attempt to interpret and disclose the inner mechanisms underlying in-context learning (ICL) (Garg et al., 2022; Kossen et al., 2023; Ren et al., 2024a; Cho et al., 2024). We discuss one interpretability method for ICL along the “feature” research line (mentioned in Section 8.1.1). When an LLM learns to solve specific tasks through context, it can be conceptualized as a computation graph, in which circuits are subgraphs composed of linked features and the weights that connect them. Just as features are the representational primitives of concepts, circuits serve as the computational primitives of tasks (Michaud et al., 2024).
Induction heads are a type of circuit within LLMs believed to be critical for enabling in-context learning abilities (Olsson et al., 2022). These circuits operate by performing prefix matching and copying previously occurring sequences. An induction head consists of two attention heads working together:
• Prefix-matching head: the first attention head, located in an earlier layer of the model, attends to prior tokens that are followed by the current token. That is, it scans the sequence for earlier positions where the current token appeared immediately after certain tokens, effectively performing prefix matching. This process identifies the “attend-to” token: the token that follows the current token in those previous occurrences.
• Copying head (induction head): the second attention head, known as the induction head, takes the “attend-to” token identified by the first head and copies it, increasing its output logits. By boosting the likelihood of this token in the model’s output, the induction head extends the recognized sequence, effectively enabling the model to predict and generate sequences that mirror previously observed patterns.
Through the collaboration of these two heads, induction heads enable LLMs to recognize and replicate patterns within the input sequence, a fundamental aspect of in-context learning. Elhage et al. (2021) identify two types of circuits in the transformer: i) “query-key” (QK) circuits and ii) “output-value” (OV) circuits, which are crucial for knowledge retrieval and updating. The QK circuits determine which previously seen token to copy information from; the OV circuits determine how the current token influences the output logits.
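The prefix-match-then-copy behavior of an induction head can be illustrated with a toy next-token predictor. This is a plain-Python sketch of the *behavior* only, not an attention implementation; the function name and example tokens are illustrative:

```python
def induction_predict(tokens):
    """Toy sketch of induction-head behavior: find the most recent
    earlier occurrence of the current (last) token, then predict the
    token that followed it there ("prefix match, then copy")."""
    current = tokens[-1]
    # Prefix-matching step: scan earlier positions, most recent first,
    # for a prior occurrence of the current token.
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == current:
            # Copying step: output the token that followed the match,
            # mimicking the induction head boosting its logit.
            return tokens[i + 1]
    return None  # no earlier occurrence to copy from

# The pattern "A B ... A" is completed with "B":
print(induction_predict(["The", "cat", "sat", ".", "The"]))  # → "cat"
```

In a real transformer the matching is a soft attention pattern (the QK circuit) and the copying raises the matched token's logit rather than selecting it outright (the OV circuit), but the hard-matching loop above captures the same pattern-completion logic.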
Reasoning
Architectural design choices that structure the model for interpretability of in-context learning mechanisms.
Value Misalignment
Value Misalignment > Mitigating social bias
Value Misalignment > Privacy protection
Value Misalignment > Methods for mitigating toxicity
Value Misalignment > Methods for mitigating LLM amorality
Robustness to attack
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to a comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and adhered to by a range of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety, with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity of a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Measure
Quantifying, testing, and monitoring identified AI risks