MIT AI Risk Navigator

Explore the full landscape of AI risk in one place. The Navigator connects MIT's AI Risk Repository datasets through a shared taxonomy, so you can move between them and surface the patterns that matter.

Built for researchers, policymakers, auditors, and anyone working to understand and manage AI risk. A project of the AI Risk Initiative at MIT FutureTech, with the generous support of the Cambridge Boston Alignment Initiative.

The AI risk landscape

Seven domains capture the full scope of AI risk, from discrimination and toxicity to AI system safety and socioeconomic disruption. Select a domain to explore its subdomains.

Recent incidents

The latest publicly reported AI incidents, drawn from the AI Incident Database and classified by risk domain.

How governance is responding

Key laws, regulations, and standards shaping AI policy, drawn from ETO's AGORA dataset.

How to reduce AI risk

Notable mitigation actions drawn from major AI risk frameworks.

3.1 Testing & Auditing

Red Teaming

An exercise in which a group of people or automated systems pretend to be an adversary and attack an organisation’s systems in order to identify vulnerabilities.

Source: International AI Safety Report

2.4 Content Safety Controls

AI-Generated Content Watermarking

Are the outputs of your firm's AI systems tagged with watermarks indicating that the material was AI-generated? (Video, Image)

Source: FLI AI Safety Index 2024

1.4 Whistleblower Reporting & Protection

Whistleblower protections

Regulations can explicitly prohibit retaliation against, and offer incentives to, whistleblowers who report violations of those regulations.

Source: Pitfalls of Evidence-Based AI Policy

4.5 Third-Party System Access

Independent Third-Party Evaluations

Independent third parties should vet evaluation protocols. These third parties should also be granted permission and resources to independently perform their evaluations, verifying the accuracy of the results.

Source: A Frontier AI Risk Management Framework

4.6 User Rights & Recourse

Establish AI decision explanation framework

Implement mechanisms and tools for generating human-understandable explanations of AI system decisions, including feature importance, decision paths, confidence levels, and clear attribution of data sources and their characteristics used during inference.

Source: The Unified Control Framework

3.5 Post-deployment Monitoring

Content Provenance Review

Define organizational responsibilities for periodic review of content provenance and incident monitoring for generative AI (GAI) systems.

Source: Artificial Intelligence Risk Management Framework