Input validation, output filtering, and content moderation classifiers.
Guardrails are programmable constraints or rules that sit between users and LLMs (Rebedea et al., 2023; Jiang et al., 2023; Databricks, 2024b). They monitor, influence, and instruct the interaction between LLMs and users, usually via system prompts set in the LLM's front-end application that enforce constraints on the model's output. For example, system prompts may require models to help users in a caring, respectful, and honest manner; to avoid harmful, unethical, biased, or negative content; and to ensure that responses promote fairness and positivity. LLM outputs are thereby restricted within these guardrails to ensure their safety. Guardrail technology is currently in wide use at AI companies such as NVIDIA, Mistral AI, and Databricks.
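The system-prompt mechanism described above can be sketched as a thin wrapper that prepends the behavioural constraints to every user turn before calling the model. This is a minimal illustration, not any vendor's implementation; `call_llm` is a hypothetical stand-in for a real chat-completion back end.

```python
# A system-prompt guardrail: every user turn is wrapped with a fixed set of
# behavioural constraints before the model sees it.

GUARDRAIL_SYSTEM_PROMPT = (
    "You are a helpful assistant. Respond in a caring, respectful, and "
    "honest manner. Avoid harmful, unethical, biased, or negative content, "
    "and ensure your responses promote fairness and positivity."
)

def call_llm(messages):
    # Placeholder for a real LLM back end; here it simply echoes the user
    # turn so the wrapper can be exercised without a model.
    user_turn = messages[-1]["content"]
    return f"[model reply to: {user_turn}]"

def guarded_chat(user_input: str) -> str:
    """Prepend the guardrail system prompt before every LLM call."""
    messages = [
        {"role": "system", "content": GUARDRAIL_SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]
    return call_llm(messages)
```

In a real deployment the constraint text would be tuned to the application, and the system prompt would typically be combined with the input/output checks described below rather than used alone.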
NeMo Guardrails (Rebedea et al., 2023) is an open-source toolkit released by NVIDIA for adding programmable guardrails to LLM-based conversational systems. It supports three types of guardrails: topical, safety, and security. Topical guardrails ensure that conversations stay focused on specific topics and do not stray into undesirable areas. Safety guardrails ensure that interactions with LLMs do not result in misinformation, malicious responses, or inappropriate content. Security guardrails prevent LLMs from executing malicious code or calling external applications in ways that pose security risks.
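NeMo Guardrails expresses rails in its own configuration language (Colang) backed by intent classification; the toy sketch below only illustrates the idea of a topical guardrail, using a keyword check as an assumed stand-in for real intent detection. The topic names and keyword lists are invented for illustration and are not part of NVIDIA's toolkit.

```python
# Toy topical guardrail: the assistant only handles requests on allowed
# topics and deflects everything else. A keyword match stands in for the
# intent classification a real rails engine would perform.

ALLOWED_TOPICS = {
    "billing": ["invoice", "payment", "refund"],
    "shipping": ["delivery", "tracking", "shipment"],
}

REFUSAL = "I can only help with billing and shipping questions."

def topical_rail(user_input: str) -> str:
    """Route on-topic requests onward; refuse everything else."""
    text = user_input.lower()
    for topic, keywords in ALLOWED_TOPICS.items():
        if any(kw in text for kw in keywords):
            return f"(on-topic: {topic}) request forwarded to the LLM"
    return REFUSAL
```

Safety and security rails follow the same interception pattern, but inspect the model's output and its tool or code-execution requests rather than the topic of the user's input.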
Reasoning: An output filtering classifier blocks harmful content before it is delivered to the user.
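An output filter of this kind can be sketched as a classifier that scores the model's draft response and withholds it when the score crosses a threshold. The lexicon-based `toxicity_score` below is a toy assumption standing in for a real moderation classifier, and the word list and threshold are invented for illustration.

```python
# Output filtering: score the draft response with a (toy) classifier and
# block delivery if it looks harmful.

BLOCKLIST = {"bomb", "poison", "kill"}   # illustrative lexicon, not a real one
THRESHOLD = 0.5
SAFE_FALLBACK = "Sorry, I can't share that response."

def toxicity_score(text: str) -> float:
    """Fraction of words that hit the blocklist; stands in for a real model."""
    words = text.lower().split()
    if not words:
        return 0.0
    flagged = sum(1 for w in words if w.strip(".,!?") in BLOCKLIST)
    return flagged / len(words)

def filter_output(draft: str) -> str:
    """Deliver the draft only if the classifier deems it safe."""
    return draft if toxicity_score(draft) < THRESHOLD else SAFE_FALLBACK
```

Production systems replace the lexicon with a trained moderation classifier and often log blocked responses for review rather than silently discarding them.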
Deployment
Related categories:
Value Misalignment > Mitigating social bias
Value Misalignment > Privacy protection
Value Misalignment > Methods for mitigating toxicity
Value Misalignment > Methods for mitigating LLM amorality
Robustness to attack

Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and abided by a list of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Operate and Monitor: Running, maintaining, and monitoring the AI system post-deployment
Deployer: Entity that integrates and deploys the AI system for end users
Manage: Prioritising, responding to, and mitigating AI risks
Primary: Discrimination & Toxicity