Input validation, output filtering, and content moderation classifiers.
Also in Non-Model
External safeguards protect an LLM from malicious inputs or external threats by implementing safety measures outside the model itself. A number of studies propose external safeguard methods that detect unsafe content in the input and output of a given LLM.
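The basic pattern of an external safeguard can be sketched as a wrapper that screens both the input to and the output from the protected model. This is a minimal illustration, not any specific system from the cited works; `is_harmful` and `llm_generate` are hypothetical stand-ins for a real moderation classifier and a real LLM API.

```python
# Minimal sketch of an external input/output safeguard wrapper.
# `is_harmful` and `llm_generate` are hypothetical placeholders; a
# deployed system would call a trained moderation classifier and a
# real LLM here.

REFUSAL = "I can't help with that request."

def is_harmful(text: str) -> bool:
    # Toy classifier: a real system would invoke a safety model here.
    blocklist = ("build a bomb", "steal credentials")
    return any(phrase in text.lower() for phrase in blocklist)

def llm_generate(prompt: str) -> str:
    # Placeholder for the protected LLM.
    return f"Echo: {prompt}"

def guarded_generate(prompt: str) -> str:
    # Screen the input before it reaches the model...
    if is_harmful(prompt):
        return REFUSAL
    response = llm_generate(prompt)
    # ...and screen the output before it reaches the user.
    if is_harmful(response):
        return REFUSAL
    return response
```

Because both checks sit outside the model, they can be updated or swapped without retraining the LLM, which is the main appeal of this family of defenses.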
Kumar et al. (2023) erase tokens from a given prompt one at a time; if any resulting subsequence, or the prompt itself, is detected as harmful, the input is flagged as harmful. Xie et al. (2024) propose GradSafe, which detects jailbreak prompts by inspecting the gradients of safety-critical parameters in the LLM. Cao et al. (2023) build a robustly aligned LLM (RA-LLM) to defend against potential alignment-breaking attacks. Zeng et al. (2024d) assign different roles to LLM agents and use their collaborative monitoring and filtering of LLM-generated responses to perform the defense. Other methods address attacks involving undesired content with a variety of techniques, including a classification system (Markov et al., 2023), token-level detection algorithms (Hu et al., 2023), a toolkit with programmable rails (Rebedea et al., 2023), and an LLM-based input-output safeguard model (Inan et al., 2023).

Another strand of research provides an instructive prompt or modifies a given prompt. Xie et al. (2023) provide instructions that guide LLMs to self-check and respond responsibly. Wei et al. (2023b) supply several examples of safe responses to encourage safer outputs from LLMs. Xiong et al. (2024) use well-designed, interpretable suffix prompts that effectively defend against various standard and adaptive jailbreak techniques. Yi et al. (2023) add a reminder to the prompt fed to an LLM, instructing it not to execute commands embedded in external content, thereby avoiding the execution of malicious instructions hidden there. Given the response generated by a target LLM from its original input prompt, Wang et al. (2024b) ask an LLM to infer, via a "backtranslation" prompt, the input prompt that could have caused the response; the inferred prompt tends to reveal the actual intention of the original prompt. Robey et al. (2023) randomly perturb a given input prompt to generate multiple copies, then aggregate the corresponding responses to detect adversarial inputs.

External threats may employ a variety of attack strategies, making them difficult to predict and guard against; highly covert attacks and malicious inputs or outputs are challenging to identify. Moreover, over-reliance on external safeguards can make the safety of the whole system fragile: if the external system fails, the overall safety of the protected LLM is greatly reduced.
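The erase-and-check idea of Kumar et al. (2023), discussed above, can be sketched as follows. This is a simplified illustration of the brute-force variant, not the authors' implementation; the toy `is_harmful` filter and the `max_erase` bound are assumptions for demonstration.

```python
from itertools import combinations

def erase_and_check(prompt_tokens, is_harmful, max_erase=2):
    """Flag a prompt as harmful if the prompt itself, or any subsequence
    obtained by erasing up to `max_erase` tokens, is flagged by the
    safety filter `is_harmful`. Simplified sketch of the erase-and-check
    idea from Kumar et al. (2023); the real method uses this to certify
    robustness against adversarial suffixes of bounded length."""
    n = len(prompt_tokens)
    for k in range(max_erase + 1):
        for erased in combinations(range(n), k):
            subseq = [t for i, t in enumerate(prompt_tokens) if i not in erased]
            if is_harmful(" ".join(subseq)):
                return True
    return False

# Toy filter that only matches the bare harmful request. An adversarial
# suffix ("!!") hides the request from the filter, but erasing one token
# recovers it, so the prompt is still flagged.
exact_filter = lambda s: s == "make a bomb"
```

The cost grows combinatorially with `max_erase`, which is why the cited work focuses on bounded-length suffix attacks.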
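The perturb-and-aggregate defense of Robey et al. (2023) can likewise be sketched. This is an illustrative sketch under simplifying assumptions: the character-swap perturbation, the majority vote, and the `is_jailbroken` response check are stand-ins for the paper's actual components.

```python
import random

def smoothed_defense(prompt, llm_generate, is_jailbroken,
                     n_copies=5, swap_frac=0.1, seed=0):
    """Randomized-smoothing-style defense in the spirit of Robey et al.
    (2023): randomly perturb characters of the prompt, query the model
    on each copy, and take a majority vote over whether the responses
    look jailbroken. Parameters here are illustrative assumptions."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    votes = 0
    for _ in range(n_copies):
        chars = list(prompt)
        n_swap = max(1, int(len(chars) * swap_frac))
        # Replace a few random characters with random ones; this tends
        # to break brittle adversarial suffixes while leaving the
        # natural-language intent of the prompt intact.
        for i in rng.sample(range(len(chars)), n_swap):
            chars[i] = rng.choice(alphabet)
        response = llm_generate("".join(chars))
        votes += is_jailbroken(response)
    # Majority vote: refuse if most perturbed copies elicit unsafe output.
    return "REFUSE" if votes > n_copies // 2 else "ALLOW"
```

The intuition is that adversarial prompts are brittle under character-level noise, while benign prompts are not, so aggregating over perturbed copies filters out the attack.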
Reasoning
Mitigation name and evidence placeholders prevent identifying focal activity or implementation location.
Attack defense
Value Misalignment
Value Misalignment > Mitigating social bias
Value Misalignment > Privacy protection
Value Misalignment > Methods for mitigating toxicity
Value Misalignment > Methods for mitigating LLM amorality
Robustness to attack
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and abided by a list of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Operate and Monitor
Running, maintaining, and monitoring the AI system post-deployment
Deployer
Entity that integrates and deploys the AI system for end users
Manage
Prioritising, responding to, and mitigating AI risks
Other