Red teaming, capability evaluations, adversarial testing, and performance verification.
OpenAI conducts qualitative evaluations of GPT-4 focused on misuse. Over 50 experts from cybersecurity, biorisk, and international security adversarially test GPT-4 and provide general feedback, which informs subsequent mitigations and improvements to the model. In addition, OpenAI conducts a non-conventional weapons proliferation evaluation on GPT-4, primarily to explore whether GPT-4 could provide the information that proliferators would need to develop, acquire, or disperse nuclear, radiological, biological, and chemical weapons (OpenAI, 2023a). For autonomy risks, OpenAI collaborates with the Alignment Research Center (ARC) to evaluate, through expert red teaming, GPT-4's ability to autonomously replicate itself and acquire resources. Although the initial evaluation finds GPT-4 ineffective at autonomous replication and resource acquisition, ARC warns that such risks may emerge in future, more advanced LLMs (OpenAI, 2023a).

Anthropic proposes multiple levels of evaluation for catastrophic risks. To address the problem that a predetermined safety threshold for a given level may be accidentally exceeded while training LLMs, Anthropic's safety researchers set up safety buffers for each risk level (Anthropic, 2023b). Under this buffer strategy, the evaluation for a risk level is triggered slightly below that level's capability threshold, and the buffer is sized to be larger than the capability gained between two consecutive evaluations. This reduces the likelihood of accidentally crossing safety boundaries during rapid increases in model capability, giving researchers and developers more time to prepare and implement appropriate safety measures (a minimal sketch of this triggering logic appears at the end of this section).

Similarly, Google DeepMind develops early warning evaluations that periodically test the capabilities of frontier models to check whether they are approaching critical capability levels (DeepMind, 2024).

Model Evaluation and Threat Research (METR), formerly known as ARC Evals, is dedicated to evaluating whether advanced AI systems pose a catastrophic risk to society, and now focuses on evaluating the autonomy of LLMs. METR argues that, unlike the ability to develop biological weapons or execute high-value cyberattacks, autonomy does not directly enable AI systems to cause catastrophic harm; rather, it measures the extent to which an AI system can have a profound impact on the world with minimal human involvement, which is useful across a variety of threat models. Following this reasoning, METR has recently released autonomy evaluation resources that include a task suite, software tools, and guidelines for accurately measuring LLM capabilities (METR, 2024).

Apollo Research focuses on evaluating strategic deception. It finds that, under varying levels of pressure, GPT-4 engages in illegal behavior such as insider trading and lies about its actions (Scheurer et al., 2023). This finding demonstrates that AI systems may adopt strategies that humans do not approve of in order to help themselves.
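To make the safety-buffer strategy described above concrete, the following is a minimal sketch. It assumes a scalar capability score, a fixed evaluation cadence, and hypothetical threshold and buffer values; none of these names or numbers come from Anthropic's actual policy (Anthropic, 2023b).

```python
# Illustrative sketch of a safety-buffer trigger for risk-level evaluations.
# All thresholds, buffer sizes, and growth rates below are hypothetical.

from dataclasses import dataclass


@dataclass
class RiskLevel:
    name: str
    capability_threshold: float  # capability score at which this level's risks apply
    safety_buffer: float         # evaluations trigger this far below the threshold


def should_trigger_evaluation(current_capability: float, level: RiskLevel) -> bool:
    """Trigger the level's evaluation once capability enters the buffer zone."""
    return current_capability >= level.capability_threshold - level.safety_buffer


def buffer_is_sufficient(level: RiskLevel, gain_per_interval: float) -> bool:
    """Check that the buffer exceeds the capability gained between two evaluations,
    so the threshold cannot be crossed unnoticed within one evaluation interval."""
    return level.safety_buffer > gain_per_interval


if __name__ == "__main__":
    level = RiskLevel(name="hypothetical-level-3",
                      capability_threshold=0.80,
                      safety_buffer=0.10)

    # Suppose capability grows by about 0.04 per evaluation interval during training.
    assert buffer_is_sufficient(level, gain_per_interval=0.04)

    for capability in (0.60, 0.72, 0.76):
        if should_trigger_evaluation(capability, level):
            print(f"capability={capability:.2f}: run {level.name} evaluations now")
        else:
            print(f"capability={capability:.2f}: below buffer zone, continue training")
```

Under these assumptions, evaluations begin once the score reaches 0.70, well before the 0.80 threshold, and the buffer (0.10) comfortably exceeds the expected per-interval gain (0.04), which is the property the strategy relies on.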
Evaluation
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and followed by a list of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Verify and Validate
Testing, evaluating, auditing, and red-teaming the AI system
Developer
Entity that creates, trains, or modifies the AI system
Measure
Quantifying, testing, and monitoring identified AI risks