As deepfake tools become cheaper and easier to use, the potential for fraud, blackmail, and identity theft has grown sharply. Deepfake quality has reached the point where even experienced observers struggle to distinguish genuine footage from manipulated media, putting the authenticity of digital content itself under siege and raising urgent questions about trust in online information. Technical defenses have been attempted, such as adding adversarial noise to photos posted online so that AI systems cannot exploit them, but these measures have proven largely ineffective in practice: every type of defense so far has eventually been bypassed by a subsequent attack, so there is no perfect technical countermeasure (Segerie, 2024). To counter the growing threat of deepfake-related crimes, the primary solution is therefore to establish stricter norms and stronger supervision.
Reasoning
Deepfake mitigation methods detect and attribute AI-generated content through technical mechanisms like watermarking and detection systems.
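One detection mechanism of this kind is statistical text watermarking, where the generator biases sampling toward a pseudo-random "green list" of tokens and a detector later checks whether green tokens are over-represented. The sketch below is a toy illustration of that idea in the spirit of the green-list scheme of Kirchenbauer et al. (2023); the key, the green fraction, and the hashing scheme are illustrative assumptions, not any production design.

```python
import hashlib

# Toy sketch of green-list watermark detection. SECRET_KEY, GREEN_FRACTION,
# and the hashing scheme below are illustrative assumptions.
SECRET_KEY = "demo-key"   # assumed shared secret between generator and detector
GREEN_FRACTION = 0.5      # fraction of the vocabulary marked "green" per step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the
    preceding token and the secret key."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_zscore(tokens: list[str]) -> float:
    """z-score of the observed green-token count against the null
    hypothesis that each token is green with probability GREEN_FRACTION."""
    n = len(tokens) - 1  # number of (previous, current) pairs scored
    if n <= 0:
        return 0.0
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    mean = n * GREEN_FRACTION
    std = (n * GREEN_FRACTION * (1.0 - GREEN_FRACTION)) ** 0.5
    return (greens - mean) / std
```

A watermarking generator would bias token sampling toward each step's green list, so watermarked text yields a large positive z-score while ordinary text hovers near zero; a detector could then flag text whose score exceeds some threshold.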
Misuse
Laws and penalties
Governments must prioritize the enactment and enforcement of robust legal frameworks that directly address the malicious use of deepfake technology. This includes criminalizing the creation and distribution of non-consensual deepfake pornography. By imposing severe penalties and establishing clear legal repercussions, lawmakers can create strong deterrents that discourage the misuse of deepfake technology.
Content Moderation and Platform Accountability
Online platforms bear significant responsibility for curbing the spread of harmful AI-generated content. To counter the proliferation of deepfakes and other deceptive media, platforms should be required to proactively detect and remove problematic AI-generated content, false information, and privacy-invading material. Crucially, platforms should face fines or other penalties if they fail to take timely and effective action against such content.
Education and Public Awareness
Public awareness campaigns should teach people how to critically evaluate digital content and recognize potential deepfakes. A more informed and skeptical public reduces the impact of false information and helps prevent individuals from falling victim to AI-driven scams and deceptions.
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and abided by a list of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academy researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers has been publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.