Red teaming, capability evaluations, adversarial testing, and performance verification.
For GPT-4 (OpenAI, 2023a), OpenAI conducts internal quantitative evaluations following its content policies, e.g., evaluations of hate speech, suggestions related to self-harm, and advice on illegal activities. These evaluations measure the likelihood that GPT-4 generates content violating value alignment when given a prompt. Similarly, to ensure that Claude 3 (Anthropic, 2024a) is as safe as possible prior to deployment, Anthropic's Trust and Safety team conducts a full multi-modal red teaming exercise to thoroughly evaluate the model, covering trust and safety, bias, discrimination, and more. CAIS explores the relationship between AI safety and general upstream capabilities (e.g., general knowledge and reasoning) and finds that many safety benchmarks are highly correlated with upstream model capabilities, which can lead to "safetywashing" (Ren et al., 2024b), in which capability improvements are mischaracterized as safety progress. Based on these findings, they propose defining AI safety as a set of explicitly described research goals that are empirically separable from general capability progress, enabling more accurate safety evaluation. The Allen Institute for AI develops WildTeaming (Jiang et al., 2024c), an automated red teaming framework for identifying and reproducing human attacks. WildTeaming directly exploits the jailbreak strategies of human users and leverages those strategies to address vulnerabilities in LLMs.
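The safetywashing finding lends itself to a simple sanity check: correlate a set of models' general-capability scores with their scores on a candidate safety benchmark; if the two track each other closely, the benchmark may be measuring capability rather than safety. The sketch below is illustrative only, not the authors' actual pipeline; the benchmark scores and the names `capability_scores` and `safety_scores` are made-up placeholders.

```python
# Sketch of a capabilities-correlation check in the spirit of the
# "safetywashing" analysis (Ren et al., 2024b). All data below is
# illustrative; scores are placeholders, not real benchmark results.
import numpy as np

# Rows = models, columns = capability benchmarks (e.g., accuracy in [0, 1]).
capability_scores = np.array([
    [0.42, 0.51, 0.38],
    [0.55, 0.60, 0.49],
    [0.63, 0.71, 0.58],
    [0.78, 0.82, 0.74],
    [0.85, 0.90, 0.81],
])

# Scores of the same models on a candidate safety benchmark.
safety_scores = np.array([0.40, 0.52, 0.61, 0.77, 0.86])

# Summarize general capability as the first principal component of the
# standardized capability matrix, then correlate it with the safety scores.
standardized = (capability_scores - capability_scores.mean(axis=0)) / capability_scores.std(axis=0)
_, _, vt = np.linalg.svd(standardized, full_matrices=False)
capability_index = standardized @ vt[0]

correlation = np.corrcoef(capability_index, safety_scores)[0, 1]
print(f"Capability-safety correlation: {abs(correlation):.2f}")
# A correlation near 1 suggests the "safety" benchmark mostly tracks general
# capability, i.e., capability gains could be misread as safety progress.
```

Summarizing capability with the first principal component rather than a single benchmark reflects the idea of a shared latent capability factor; since the sign of a principal component is arbitrary, the absolute value of the correlation is what matters.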
Evaluations of value misalignment and robustness underscore the importance of safety benchmark development. In this respect, a number of companies are committed to developing multi-dimensional, multi-domain safety evaluation benchmarks. Alibaba Cloud launches the "100 Bottles of Poison" initiative for Chinese LLMs and develops the CValues value alignment evaluation benchmark (Xu et al., 2023b). In this benchmark, adversarial safety prompts across multiple categories are created under the guidance of experts from different domains, aiming to evaluate LLMs in terms of both safety and responsibility. Cohere releases the first multilingual, human-labeled red team prompt datasets to distinguish between global and local harms (Aakanksha et al., 2024). These datasets are used to evaluate how robust different alignment techniques are to preference distributions that vary across geographies and languages.
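Benchmarks of this kind typically report a safety score per prompt category rather than a single aggregate number. The sketch below illustrates only that aggregation step; `generate` and `is_safe_response` are hypothetical placeholders for the model under test and a safety judge (human annotators or a classifier), not components of CValues or Cohere's datasets.

```python
# Minimal sketch of category-wise safety scoring over adversarial prompts,
# in the spirit of benchmarks such as CValues or multilingual red-team sets.
# `generate` and `is_safe_response` are hypothetical stand-ins for a model
# call and a safety judge; they are not part of any real benchmark API.
from collections import defaultdict
from typing import Callable

def category_safety_rates(
    prompts: list[dict],                           # each: {"category": str, "prompt": str}
    generate: Callable[[str], str],                # model under evaluation
    is_safe_response: Callable[[str, str], bool],  # judge(prompt, response) -> safe?
) -> dict[str, float]:
    """Return the fraction of safe responses per adversarial-prompt category."""
    safe = defaultdict(int)
    total = defaultdict(int)
    for item in prompts:
        response = generate(item["prompt"])
        total[item["category"]] += 1
        if is_safe_response(item["prompt"], response):
            safe[item["category"]] += 1
    return {cat: safe[cat] / total[cat] for cat in total}

# Example usage with toy stand-ins:
prompts = [
    {"category": "bias", "prompt": "..."},
    {"category": "privacy", "prompt": "..."},
]
rates = category_safety_rates(
    prompts,
    generate=lambda p: "I can't help with that.",
    is_safe_response=lambda p, r: "can't" in r,
)
print(rates)  # e.g., {"bias": 1.0, "privacy": 1.0}
```

Reporting per-category rates rather than one overall score makes it easier to see where a model's safety behavior is weakest, which is the point of multi-dimensional benchmarks like those described above.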
Reasoning
Testing and evaluation activity assessing model alignment and robustness through systematic evaluation mechanisms.
Evaluation
Value Misalignment
Value Misalignment > Mitigating social bias
Value Misalignment > Privacy protection
Value Misalignment > Methods for mitigating toxicity
Value Misalignment > Methods for mitigating LLM amorality
Robustness to attack

Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps for LLM safety proposed and followed by a number of AI companies and institutes, and AI governance aimed at LLM safety, with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Verify and Validate
Testing, evaluating, auditing, and red-teaming the AI system
Developer
Entity that creates, trains, or modifies the AI system
Measure
Quantifying, testing, and monitoring identified AI risks