The formation and storage of concepts in LLMs rely on neurons, attention heads, and their complex interactions
Concepts encoded by neural networks are usually referred to as features (Olah et al., 2020). For example, a neuron or group of neurons that consistently activates on French text can be interpreted as a “French text detector” feature (Gurnee et al., 2023). Neurons are the basic units in LLMs for memorizing patterns, potentially representing individual features. A neuron corresponding to a single semantic concept is monosemantic, implying a one-to-one relationship between neurons and features. In transformer models, however, neurons are often observed to be polysemantic, i.e., activated by multiple unrelated concepts (Elhage et al., 2022). Gurnee & Tegmark (2024) show that shallow layers tend to represent many low-level features in superposition, while middle layers include dedicated neurons representing high-level features. Sparse autoencoders (SAEs) have recently been used to disentangle superposition and reach a monosemantic understanding, e.g., via dictionary learning, which decomposes activations into sparse combinations of learned dictionary features (Sharkey et al., 2023). Anthropic and OpenAI have implemented visual explanations of features based on SAEs (Templeton, 2024; Gao et al., 2024), such as the visualization of the so-called Golden Gate Bridge feature. More valuable still is the discovery of features related to a wide range of safety issues, including deception, sycophancy, bias, and dangerous content. These identified features can be used to steer the output of LLMs (Bricken et al., 2023).
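To make the SAE idea concrete, here is a minimal sketch (not from the survey; all dimensions, weights, and the steering coefficient are hypothetical): activations are encoded into an overcomplete, non-negative feature vector, reconstructed through a learned dictionary, and trained against a reconstruction-plus-L1 objective that encourages sparse, monosemantic features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: the feature dictionary is overcomplete (d_dict > d_model),
# so superposed concepts can spread out into separate dictionary entries.
d_model, d_dict = 8, 32
W_enc = rng.normal(0, 0.1, (d_dict, d_model))
b_enc = np.zeros(d_dict)
W_dec = rng.normal(0, 0.1, (d_model, d_dict))  # columns = dictionary directions
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode an activation vector into sparse features, then reconstruct it."""
    f = np.maximum(0.0, W_enc @ x + b_enc)  # ReLU -> non-negative feature activations
    x_hat = W_dec @ f + b_dec               # reconstruction from the dictionary
    return f, x_hat

def sae_loss(x, l1_coeff=1e-3):
    """Reconstruction error plus an L1 sparsity penalty on the features."""
    f, x_hat = sae_forward(x)
    return np.sum((x - x_hat) ** 2) + l1_coeff * np.sum(np.abs(f))

x = rng.normal(size=d_model)          # stand-in for a residual-stream activation
f, x_hat = sae_forward(x)

# "Steering" in the spirit of the feature-manipulation results: add a chosen
# dictionary direction (here feature 0, scale 4.0 -- both arbitrary) to the
# activations to amplify that feature's influence on the model's output.
x_steered = x + 4.0 * W_dec[:, 0]
```

In practice the encoder and decoder are trained with gradient descent on large batches of real model activations; this sketch only shows the forward pass and objective.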
Reasoning
Foundational research investigating how LLM models internally represent and store learned concepts.
Interpretability for LLM abilities: in-context learning
Many studies attempt to interpret and disclose the inner mechanisms underlying in-context learning (ICL) (Garg et al., 2022; Kossen et al., 2023; Ren et al., 2024a; Cho et al., 2024). We discuss one interpretability method for ICL along the “feature” research line (mentioned in Section 8.1.1). When an LLM learns to solve specific tasks through context, it can be conceptualized as a computation graph, in which circuits are subgraphs composed of linked features and the weights that connect them. Just as features are the representational primitives of concepts, circuits function as the computational primitives of tasks (Michaud et al., 2024).
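As a toy illustration of the circuits-as-subgraphs view (not from the survey; the feature names and weights are invented), the model can be treated as a weighted graph over features, and the circuit for a task is the upstream subgraph of features that eventually feed that task's output node:

```python
# Hypothetical computation graph: feature -> list of (downstream feature, weight).
graph = {
    "french_detector": [("translate_task", 0.8)],
    "quote_opener":    [("translate_task", 0.1), ("copy_task", 0.7)],
    "induction_head":  [("copy_task", 0.9)],
}

def circuit_for(output):
    """Return all features with a weighted path into `output` (its circuit)."""
    upstream = {output}
    changed = True
    while changed:  # backward transitive closure over the graph
        changed = False
        for src, edges in graph.items():
            if src not in upstream and any(dst in upstream for dst, _ in edges):
                upstream.add(src)
                changed = True
    return upstream - {output}
```

For example, `circuit_for("copy_task")` yields the induction-style subgraph, while features irrelevant to copying (here the hypothetical French detector) stay outside the circuit.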
Interpretability for LLM abilities: generalization and emergence of abilities
Recent studies argue that LLMs exhibit emergent abilities, which are absent in smaller models but present in larger-scale models (Schaeffer et al., 2023). Existing research (Power et al., 2022; Doshi et al., 2024), by observing the dynamic training process of models, has identified two important phenomena related to generalization and emergence: grokking and memorization.
Interpretability in model safety auditing
As the application of LLMs in high-stakes domains such as healthcare, finance, and law continues to grow, it is imperative not only to assess their accuracy but also to scrutinize their safety and reliability (Li et al., 2023e; Liu et al., 2023b). From a societal perspective, the widespread adoption of LLMs across various domains presents potential risks. These risks can arise from a disconnect between LLM developers and users: the former often prioritize technological advancement over practical application, while the latter may introduce LLMs into their fields without sufficient safety measures or evidence that prior successes will replicate. Mökander et al. (2023) therefore propose that model safety be audited by third-party entities to rapidly identify risks within LLM systems and issue safety alerts. The auditing process for model safety comprises three steps:
• Governance Audit: evaluating the design and dissemination of LLMs to ensure compliance with relevant legal and ethical standards.
• Model Review: a thorough examination of the LLMs themselves, covering aspects such as performance, safety, and fairness.
• Application Review: assessing applications built on LLMs to ensure their reliability and safety in practical use.
Value Misalignment
Value Misalignment > Mitigating social bias
Value Misalignment > Privacy protection
Value Misalignment > Methods for mitigating toxicity
Value Misalignment > Methods for mitigating LLM amorality
Robustness to attack
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and abided by a list of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.