Technical mechanisms operating on non-model components of the AI system without modifying model weights. Components include: input/output interfaces, runtime environment, guardrail/monitoring classifiers, tool chain, and hardware.
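A minimal sketch of what such a system-level mitigation can look like: a guardrail at the input/output interface that wraps a frozen model without touching its weights. All names here (`generate`, `guarded_generate`, `BLOCKED_TERMS`) are illustrative assumptions, not a specific system's API.

```python
# Hypothetical runtime guardrail around an opaque model function.
# The model itself is treated as a black box; only the interface changes.

BLOCKED_TERMS = {"build a bomb"}  # stand-in for a real policy classifier

def generate(prompt: str) -> str:
    """Stand-in for the frozen model; its weights are never modified."""
    return f"model response to: {prompt}"

def guarded_generate(prompt: str) -> str:
    # Input-side filter: refuse before the model ever runs.
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "[refused by input guardrail]"
    response = generate(prompt)
    # Output-side filter: scan the completion before returning it.
    if any(term in response.lower() for term in BLOCKED_TERMS):
        return "[redacted by output guardrail]"
    return response
```

In practice the keyword set would be replaced by a dedicated guardrail/monitoring classifier, but the structural point is the same: the mitigation lives in the interface, not in the model.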
Also in AI System
AI control encompasses methods designed to safely extract useful work from potentially misaligned AI systems through constraints, monitoring, and termination capabilities, while preventing them from subverting safety measures even if they actively try to do so.
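The three ingredients named above (constraints, monitoring, termination) can be sketched as a simple control loop. This is a toy illustration under assumed names (`monitor`, `run_episode`, `ALLOWED_TOOLS`), not a description of any deployed control scheme.

```python
# Hedged sketch of an AI-control loop: the untrusted policy proposes
# actions, a monitor inspects each one, and the runtime can terminate
# the episode at any step.

ALLOWED_TOOLS = {"search", "calculator"}  # constraint: tool allowlist

def monitor(action: dict) -> bool:
    """Return True if the proposed action looks safe enough to execute."""
    return action["tool"] in ALLOWED_TOOLS

def run_episode(actions, max_steps=10):
    executed = []
    for step, action in enumerate(actions):
        if step >= max_steps:              # constraint: step budget
            break
        if not monitor(action):            # monitoring
            return executed, "terminated"  # termination capability
        executed.append(action["tool"])
    return executed, "completed"
```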
Reasoning
Constraints and termination capabilities are runtime execution controls; monitoring is behavioral observation. The mitigation therefore spans multiple level-3 categories under 1.2 without a clear primary one.
Reduce Hallucinations
Hallucination reduction refers to techniques and methods for minimizing AI systems' tendency to generate false or fabricated information, addressing a critical challenge: language models produce inaccurate facts or citations that can spread misinformation.
1 AI System
Mitigate Hallucinations
Technical approaches to reduce LLM hallucinations: instances where AI models generate false or unsupported information while appearing confident in their responses.
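One common family of such approaches grounds answers in retrieved evidence and abstains when no support is found. The corpus and the lexical-overlap heuristic below are assumptions for the demo; a real system would use a retrieval index and an entailment model.

```python
# Illustrative grounding check: only return a claim if some document
# in the (toy) corpus supports it; otherwise abstain.

CORPUS = [
    "The Eiffel Tower is located in Paris.",
    "Water boils at 100 degrees Celsius at sea level.",
]

def supported(claim: str, corpus=CORPUS, threshold=0.5) -> bool:
    """Crude word-overlap check standing in for a real entailment model."""
    claim_words = set(claim.lower().split())
    for doc in corpus:
        overlap = len(claim_words & set(doc.lower().split()))
        if overlap / max(len(claim_words), 1) >= threshold:
            return True
    return False

def answer_or_abstain(claim: str) -> str:
    # Abstaining ("I don't know") is preferred over an ungrounded answer.
    return claim if supported(claim) else "I don't know."
```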
1 AI System
Detecting AI-Generated Content
Detecting AI-generated content involves technical methods and tools to identify whether content was created by artificial intelligence or humans, primarily through watermarking, linguistic analysis, and machine learning approaches.
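The statistical side of watermark detection can be sketched as follows, loosely in the spirit of "green-list" token watermarks: if the generator biased sampling toward half the vocabulary, watermarked text over-represents those tokens, and a one-proportion z-test flags it. The token lists here are toy data, not output of any real watermarking scheme.

```python
import math

def green_fraction_z(tokens, green_set, p0=0.5):
    """z-score of the observed green-token fraction against chance p0."""
    n = len(tokens)
    hits = sum(1 for t in tokens if t in green_set)
    return (hits - p0 * n) / math.sqrt(n * p0 * (1 - p0))

GREEN = {"a", "c", "e", "g"}
watermarked = ["a", "c", "e", "a", "g", "c", "e", "a"]   # all green tokens
unmarked    = ["a", "b", "c", "d", "e", "f", "g", "h"]   # half green tokens

# A large positive z-score suggests the watermark is present;
# a score near zero is consistent with unwatermarked text.
```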
1.2.5 Provenance & Watermarking
Risks from Persuasion
Risk that AI systems can systematically influence human beliefs and behaviors through sustained, personalized interactions by exploiting cognitive biases and adapting in real-time, enabling large-scale manipulation without human intervention.
99 Other
Content Moderation
Content moderation systems enable detecting and filtering toxic content (hate speech, harassment, misinformation) in real-time on digital platforms, while maintaining transparency in moderation decisions.
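A toy sketch of such a moderation gate that also records why each decision was made, reflecting the transparency requirement above. The keyword scores are a placeholder for a real toxicity classifier; all names are illustrative.

```python
# Hypothetical real-time moderation gate with a per-decision audit record.

TOXIC_TERMS = {"idiot": 0.7, "scum": 0.9}  # stand-in for a learned scorer

def moderate(message: str, threshold: float = 0.6):
    """Return (allowed, decision_record) for one message."""
    score = max((TOXIC_TERMS.get(w, 0.0) for w in message.lower().split()),
                default=0.0)
    record = {
        "message": message,
        "score": score,
        "action": "removed" if score >= threshold else "allowed",
    }
    # The record makes each moderation decision auditable after the fact.
    return record["action"] == "allowed", record
```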
1.2.1 Guardrails & Filtering
Make AI Manipulation Use Illegal
Legal framework to criminalize the malicious use of AI for manipulation of individuals or groups, including the creation and deployment of deepfakes and automated influence campaigns.
3.1.1 Legislation & Policy
Global Risk and AI Safety Preparedness (GRASP)
Hodes, Cyrus; Salem, Fadi; Corruble, Vincent; Ségerie, Charbel-Raphaël; Claybrough, Jonathan; Veron, Thibaud; Majid, Zainab; Fan, Jinyu; Lorin, Amaury (2025)
Project GRASP (Global Risk and AI Safety Preparedness) is a comprehensive database mapping AI risks and mitigation solutions. The initiative addresses both endogenous risk (autonomous AI systems behaving outside human supervision) and exogenous risk (human misuse of those AI systems). The platform serves policymakers, researchers, and industry leaders by providing the tools needed to identify risks, understand solutions, and find innovations.
Other (stage not listed)
Applies to a lifecycle stage not captured by the standard categories
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks