This page is still being polished. If you have thoughts, please share them via the feedback form.
Data on this page is preliminary and may change. Please do not share or cite these figures publicly.
Runtime behavior observation, anomaly detection, and activity logging.
Also in Non-Model
In order to monitor the risks that may occur during the interaction between LLMs and users, researchers develop a variety of monitoring tools for scrutinizing LLMs inputs and outputs and predicting risks
GPT-4 (OpenAI, 2023a) uses a detection system combining machine learning and rule-based classifiers to identify content that may violate their usage policies. When such content is identified, the deployed monitoring system takes defensive measures such as issuing warnings, temporarily suspending, or in severe cases, banning the corresponding users. Similarly, Claude 3 (Anthropic, 2024a) has a content classifier that identifies any content that violates the Acceptable Use Policy (AUP) (Anthropic, 2024b). User prompts that are flagged as violating AUP trigger a command for Claude to respond more carefully. In the case of particularly serious or harmful user prompts, Claude 3 is prevented from responding at all. In the case of multiple violations, the Claude is terminated. It is important to note that these classifiers need to be updated regularly to address the changing threat landscape. ERNIEBot (Sun et al., 2021) deploys a content review system that intervenes on LLM inputs by means of manual review or rule-based filtering to ensure that LLM inputs conform to a specific standard or specification. On the output side, after filtering out harmful and sensitive words in an LLM-generated response through the content review system, the safety content of the response is used as the final output by means of semantic rewriting. Garak (Derczynski et al., 2023) is a vulnerability scanner for LLMs, which checks models for hundreds of different known weaknesses using thousands of different prompts, and checks model responses to see if the model is at risk in some way. In contrast to the private monitoring systems mentioned above, there are a number of open-source monitoring tools. Perspective API (Jigsaw, 2021) identifies offensive, rude, discriminatory, and other toxic content in online conversations. WildGuard (Han et al., 2024) evaluates the safety of user interactions with LLMs through three safety audit tasks, including Harmfulness of Prompt, Harmfulness of Response, and Rejection of Response. Llama Guard (Inan et al., 2023) is used to detect whether the input prompts and output responses generated by LLMs violate predefined safety categories. Llama 3 (Dubey et al., 2024) uses two prompt-based filtering mechanisms, Prompt Guard and Code Shield. Prompt Guard is used to detect prompting attacks, which are mainly two types of attacks, direct jailbreaks and indirect prompt injections. Code Shield is able to detect the generation of unsafe code before it can potentially enter a downstream use case (e.g., a production system), and is able to support seven programming languages. While these monitoring tools can assist humans in monitoring risky content generated by LLMs, the robustness of the tools themselves remains an issue. Previous research has found evidence of bias in the Perspective API, e.g., giving higher toxicity scores to text containing racial or gender identity terms or phrases associated with African American English (Sap et al., 2019). In the context of a content-moderation tool, these kinds of biases can cause real harm, as they may lead to suppression of speech within or about marginalized communities. Therefore, it is also important to review these widely recognized monitoring tools. IndieLabel (Risk & Alliance, 2024), a detection tool for the Perspective API, prompts users to assign a toxicity score to a small number of text examples (about 20 social media posts). A lightweight model is then trained to predict the user’s perception of a large number of examples (roughly thousands of social media posts), and these predictions are used to uncover areas of potential disagreement between users and the Perspective API. Similarly, BELLS (Dorn et al., 2024a) is a framework for evaluating the reliability and generalizability of LLMs monitoring systems, which allows for a comparison of the reliability of multiple monitoring tools, thus creating a performance competition in anomaly detection.
Reasoning
Mitigation name "Monitoring" lacks definition and evidence; cannot identify focal activity or location.
Deployment
Value Misalignment
99.9 OtherValue Misalignment > Mitigating social bias
1 AI SystemValue Misalignment > Privacy protection
1 AI SystemValue Misalignment > Methods for mitigating toxicity
1 AI SystemValue Misalignment > Methods for mitigating LLM amorality
1 AI SystemRobustness to attack
1 AI SystemLarge Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and abided by a list of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academy researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers has been publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Operate and Monitor
Running, maintaining, and monitoring the AI system post-deployment
Deployer
Entity that integrates and deploys the AI system for end users
Measure
Quantifying, testing, and monitoring identified AI risks
Primary
4 Malicious Actors & Misuse