MIT AI Risk Navigator
Explore the full landscape of AI risk in one place. The Navigator connects MIT's AI Risk Repository datasets through a shared taxonomy, so you can move between them and surface the patterns that matter.
Built for researchers, policymakers, auditors, and anyone working to understand and manage AI risk. A project of the AI Risk Initiative at MIT FutureTech, with the generous support of the Cambridge Boston Alignment Initiative.
The AI risk landscape
Seven domains capture the full scope of AI risk, from discrimination and toxicity to AI system safety and socioeconomic disruption. Select a domain to explore its subdomains.
Recent incidents
The latest publicly reported AI incidents, drawn from the AI Incident Database and classified by risk domain.
Purportedly AI-Enhanced Images of Iranian Women Protesters Were Reportedly Spread With Unverified Execution Claims
Purported Deepfake Video Reportedly Portrayed Nirmala Sitharaman Endorsing Investment Scheme
South Africa Draft National AI Policy Reportedly Included Fictitious References Believed to Be AI Hallucinations
KBS AI Translation Subtitles Reportedly Broadcast Profanity During Artemis II Launch Livestream
Baidu Apollo Go Robotaxis Stopped in Traffic During Reported System Failure in Wuhan, Stranding Some Passengers
Purported AI-Generated Impersonations of Albanian Cardiologist Spiro Qirko and Journalist Ilir Topi Were Reportedly Used on Facebook to Promote Hypertension Product in Kosovo
How governance is responding
Key laws, regulations, and standards shaping AI policy, drawn from ETO's AGORA dataset.
General Purpose AI Code of Practice, Transparency Chapter
EU AI Act
2025 AI Action Plan
California SB 53 March 2025 (CalCompute and Whistleblowers)
Executive Order on Removing Barriers To American Leadership In Artificial Intelligence
NIST AI Risk Management Framework
How to reduce AI risk
Notable mitigation actions drawn from major AI risk frameworks.
Red Teaming
An exercise in which a group of people or automated systems pretend to be an adversary and attack an organisation’s systems in order to identify vulnerabilities.
Source: International AI Safety Report
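Where red teaming is automated, the exercise usually pairs an attacker with the system under test and a check for unsafe output. The sketch below shows only that loop shape; generate_adversarial_prompt, query_target_model, and flags_unsafe_output are hypothetical placeholders, not components described in the report.

```python
def generate_adversarial_prompt(seed: str, attempt: int) -> str:
    # Hypothetical stand-in for an attacker model that rewrites the seed
    # into a new probing prompt on each attempt.
    return f"{seed} (variant {attempt})"

def query_target_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to the AI system under test.
    return f"response to: {prompt}"

def flags_unsafe_output(response: str) -> bool:
    # Hypothetical stand-in for a safety classifier or rule set that marks
    # policy-violating output; this stub never flags anything.
    return False

def red_team(seeds: list[str], attempts_per_seed: int = 5) -> list[dict]:
    """Probe the target system and record prompts that elicit unsafe output."""
    findings = []
    for seed in seeds:
        for attempt in range(attempts_per_seed):
            prompt = generate_adversarial_prompt(seed, attempt)
            response = query_target_model(prompt)
            if flags_unsafe_output(response):
                findings.append({"prompt": prompt, "response": response})
    return findings

if __name__ == "__main__":
    report = red_team(["try to elicit instructions for bypassing a content filter"])
    print(f"{len(report)} potential vulnerabilities logged")
```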
AI-Generated Content Watermarking
Are the outputs of your firm's AI systems (e.g., video, image) tagged with watermarks indicating that the material was generated by AI?
Source: FLI AI Safety Index 2024
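One lightweight way to tag generated material is to attach provenance metadata when the file is saved. The sketch below does this for PNG images with Pillow; the tag names are illustrative assumptions, and production systems typically use more robust schemes (such as signed C2PA manifests or signal-level watermarks) that survive re-encoding and cropping.

```python
# Minimal sketch: tag an AI-generated PNG with provenance metadata using Pillow.
# The key names ("ai_generated", "generator") are illustrative, not a standard.

from PIL import Image, PngImagePlugin

def tag_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    """Write a copy of the image with text chunks declaring AI provenance."""
    img = Image.open(in_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)
    img.save(out_path, pnginfo=meta)

def read_provenance(path: str) -> dict:
    """Return the PNG text chunks, where the provenance tags would appear."""
    return dict(Image.open(path).text)
```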
Whistleblower protections
Regulations can explicitly prohibit retaliation against whistleblowers who report violations of those regulations and offer incentives for reporting.
Source: Pitfalls of Evidence-Based AI Policy
Independent Third-Party Evaluations
Independent third parties should vet evaluation protocols. These third parties should also be granted permission and resources to independently perform their evaluations, verifying the accuracy of the results.
Source: A Frontier AI Risk Management Framework
Establish AI decision explanation framework
Implement mechanisms and tools for generating human-understandable explanations of AI system decisions, including feature importance, decision paths, confidence levels, and clear attribution of data sources and their characteristics used during inference.
Source: The Unified Control Framework
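As a rough illustration of what such an explanation record could carry, the sketch below assembles a prediction, a confidence level, and top feature importances from a scikit-learn model. The field names, and the use of global importances in place of a per-decision attribution method, are simplifying assumptions rather than requirements of the Unified Control Framework.

```python
# Minimal sketch of an explanation record for a single prediction,
# assuming a scikit-learn tree ensemble and a built-in example dataset.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

def explain(sample_index: int) -> dict:
    """Assemble a human-readable explanation for one decision."""
    x = data.data[sample_index : sample_index + 1]
    proba = model.predict_proba(x)[0]
    # Global feature importances stand in for a per-decision attribution method.
    top = sorted(
        zip(data.feature_names, model.feature_importances_),
        key=lambda pair: pair[1],
        reverse=True,
    )[:5]
    return {
        "prediction": data.target_names[int(proba.argmax())],
        "confidence": float(proba.max()),
        "top_features": [(name, round(float(score), 3)) for name, score in top],
        "data_source": "scikit-learn breast cancer dataset (illustrative)",
    }

print(explain(0))
```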
Content Provenance Review
Define organizational responsibilities for periodic review of content provenance and incident monitoring for GAI systems.
Source: Artificial Intelligence Risk Management Framework
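A periodic review can be partly automated by scanning published content for missing provenance tags. The sketch below checks PNG files for the illustrative "ai_generated" tag used in the watermarking sketch above; the directory layout and tag name are assumptions, not guidance from the framework itself.

```python
# Minimal sketch of a periodic provenance-review job: scan published PNGs and
# flag any that lack an AI-provenance tag in their metadata.

from pathlib import Path
from PIL import Image

def review_provenance(published_dir: str) -> list[str]:
    """Return paths of published images with no AI-provenance metadata."""
    missing = []
    for path in Path(published_dir).glob("*.png"):
        text_chunks = getattr(Image.open(path), "text", {}) or {}
        if "ai_generated" not in text_chunks:
            missing.append(str(path))
    return missing

if __name__ == "__main__":
    flagged = review_provenance("published_content")
    print(f"{len(flagged)} items need provenance review")
```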