Output attribution, content watermarking, and AI detection mechanisms.
Addressing the spread of LLM-generated misinformation requires a dual approach: first, detecting the source of the text, i.e., whether it is human-authored or machine-generated; and second, verifying whether the generated content is factual.
First, the capacity to detect and audit machine-generated text is fundamental to preventing the misuse of LLMs to create and disseminate misinformation. If all generated text could be reliably identified as LLM-produced, LLM-generated fake posts, fake news, and other misleading content could be clearly flagged. It might even become possible to largely prevent malicious actors from using LLMs to generate and disseminate malicious or misleading content on social media, since LLM-generated text would no longer be indistinguishable from human-written text. Several studies have trained classifiers to discriminate between human-written text and text generated by LLMs (Mitrovic et al., 2023). Other researchers leverage the perplexity of generated text for detection, on the assumption that AI-generated text exhibits lower perplexity (Gehrmann et al., 2019; Schuster et al., 2020; Fröhling & Zubiaga, 2021; Mitchell et al., 2023). More recently, prevailing approaches achieve detection through model watermarking, i.e., embedding subtle patterns in the text that are imperceptible to humans but allow synthetic content to be recognised (Zhao et al., 2023; Kirchenbauer et al., 2023; Lee et al., 2023; Wang et al., 2023b; Yoo et al., 2023; Kirchenbauer et al., 2024; Liu et al., 2024a;b).
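To make the watermark-detection idea concrete, the following minimal Python sketch is written in the spirit of the green-list scheme described by Kirchenbauer et al. (2023): a pseudo-random "green" subset of the vocabulary is recomputed from a hash of each preceding token, and a passage is scored by how many of its tokens fall in their step's green list. The toy vocabulary, hash seeding, and green-fraction parameter are illustrative assumptions for this sketch, not any particular published implementation.

```python
import hashlib
import math

# Hypothetical toy vocabulary and parameters for illustration only; a real
# system would use the model's tokenizer vocabulary (tens of thousands of tokens).
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "tree"]
GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step


def green_list(prev_token: str) -> set[str]:
    """Pseudo-randomly partition the vocabulary using a hash of the previous token.

    The generator would softly boost these tokens during sampling; the detector
    only needs to recompute the same partition, without access to the model.
    """
    scored = []
    for tok in VOCAB:
        digest = hashlib.sha256(f"{prev_token}|{tok}".encode()).hexdigest()
        scored.append((int(digest, 16), tok))
    scored.sort()
    cutoff = int(len(VOCAB) * GREEN_FRACTION)
    return {tok for _, tok in scored[:cutoff]}


def watermark_z_score(tokens: list[str]) -> float:
    """Count tokens that fall in their step's green list and return a z-score.

    Under the null hypothesis (no watermark), each token lands in the green
    list with probability GREEN_FRACTION, so the count is approximately
    binomial; the z-score measures deviation from that expectation.
    """
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std


if __name__ == "__main__":
    sample = ["the", "cat", "sat", "on", "the", "mat"]
    print(f"z-score: {watermark_z_score(sample):.2f}")
```

In such a scheme, unwatermarked text yields z-scores near zero, while text generated with the corresponding green-list bias drifts towards large positive z, which is the statistic a detector would threshold on.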
Reasoning
The mitigation name lacks a definition and supporting evidence; there is insufficient detail to identify the focal activity or implementation mechanism.
Misuse
Value Misalignment
Value Misalignment > Mitigating social bias
Value Misalignment > Privacy protection
Value Misalignment > Methods for mitigating toxicity
Value Misalignment > Methods for mitigating LLM amorality
Robustness to attack
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and adhered to by a range of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Operate and Monitor
Running, maintaining, and monitoring the AI system post-deployment
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks