Governance frameworks, formal policies, and strategic alignment mechanisms.
To address LLM safety more systematically, many companies and research institutes publish their own safety frameworks, which provide theoretical and technical guidance across the entire model lifecycle, from development through deployment.
OpenAI develops the Preparedness Framework (OpenAI, 2023b), which describes its process for tracking, evaluating, forecasting, and protecting against the catastrophic risks posed by increasingly powerful models. The framework categorizes risk as Low, Medium, High, or Critical, and tracks risk categories including Cybersecurity; Chemical, Biological, Radiological, and Nuclear (CBRN) threats; Persuasion; and Model Autonomy.

Anthropic proposes the Responsible Scaling Policy (RSP) (Anthropic, 2023b), a framework for assessing and mitigating the potentially catastrophic risks of AI models. The RSP defines a tiered scale of AI Safety Levels (ASL) for catastrophic risk. For Claude 3 (Anthropic, 2024a), three sources of potential catastrophic risk were evaluated: biological capabilities, cyber capabilities, and autonomous replication and adaptation (ARA) capabilities. The evaluations place Claude 3 at ASL-2, indicating that the model shows early signs of hazardous capabilities, but that the information it provides is not yet dangerous, either because it is insufficiently reliable or because it is already available from search engines.

Google DeepMind proposes the Frontier Safety Framework, which aims to address the serious risks that may arise from the powerful capabilities of future AI models (DeepMind, 2024). For models with critical capabilities, the framework proposes two classes of mitigation: security mitigations to prevent leakage of model weights, and deployment mitigations to manage access to critical capabilities. It also specifies protocols for detecting Critical Capability Levels (CCLs), the capability thresholds at which models may pose severe risks, across four risk categories: Autonomy, Biosecurity, Cybersecurity, and Machine Learning R&D.

See the source paper (pp. 78-79) for more detail.
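These frameworks share a common tiered structure: each tracked risk category is assessed against an ordered scale of levels, and deployment or further scaling is gated on the highest assessed level. The sketch below is purely illustrative of that structure; the class names, the deployment threshold, and the example assessments are assumptions made for exposition, not details taken from any published framework.

```python
from dataclasses import dataclass
from enum import IntEnum


class RiskLevel(IntEnum):
    """Ordered risk tiers, loosely mirroring the Low/Medium/High/Critical
    scale described for OpenAI's Preparedness Framework."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


@dataclass
class TrackedCategory:
    """A tracked risk category and its currently assessed level.

    Category names follow the frameworks summarized above; the assessed
    levels used below are placeholders, not real evaluation results.
    """
    name: str
    assessed_level: RiskLevel


def deployment_gate(categories: list[TrackedCategory],
                    max_deployable: RiskLevel = RiskLevel.MEDIUM) -> bool:
    """Return True only if every tracked category is at or below the
    deployment threshold (a simplified stand-in for the gating rules
    these frameworks describe in prose)."""
    return all(c.assessed_level <= max_deployable for c in categories)


if __name__ == "__main__":
    tracked = [
        TrackedCategory("Cybersecurity", RiskLevel.LOW),
        TrackedCategory("CBRN", RiskLevel.MEDIUM),
        TrackedCategory("Persuasion", RiskLevel.MEDIUM),
        TrackedCategory("Model Autonomy", RiskLevel.LOW),
    ]
    print("Deployable:", deployment_gate(tracked))  # -> Deployable: True
```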
Related categories:
Value Misalignment > Mitigating social bias
Value Misalignment > Privacy protection
Value Misalignment > Methods for mitigating toxicity
Value Misalignment > Methods for mitigating LLM amorality
Robustness to attack
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and followed by AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Other (multiple stages)
Applies across multiple lifecycle stages
Other (multiple actors)
Applies across multiple actor types
Govern
Policies, processes, and accountability structures for AI risk management