Independent audits, third-party reviews, and regulatory compliance verification.
Third-party audit verifies AI system compliance with safety standards and commitments.
Pre-deployment access by third-party auditors (2.2.3 Auditing & Compliance)
Prior to full deployment of general-purpose AI models, a group of third-party auditors not selected by the GPAI model provider could receive early access to the models in order to evaluate them from a variety of perspectives and with diverse interests [30, 157]. This prevents developers from selecting auditors who are especially favorable to them, which could result in biased or incomplete evaluations or contribute to an unjustified public perception of the model's capabilities and risks.
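One way to operationalize auditor selection that is independent of the provider is for an oversight body to assign auditors from an accredited pool, for example by a reproducible random draw. The Python sketch below only illustrates that idea; the pool, the seeded draw, and the function name are assumptions, not a mechanism specified in the cited frameworks.

    import random

    def assign_auditors(accredited_pool: list[str], n: int, seed: int) -> list[str]:
        """Draw n auditors from an accredited pool.

        Intended to be run by an independent body (e.g. a regulator),
        not the model provider, so the provider cannot hand-pick
        favorable auditors. Publishing the seed makes the draw
        reproducible and verifiable by outside parties.
        """
        if n > len(accredited_pool):
            raise ValueError("pool is smaller than the requested audit team")
        rng = random.Random(seed)
        return rng.sample(accredited_pool, n)

    # Hypothetical accredited pool; all names are placeholders.
    pool = ["Auditor A", "Auditor B", "Auditor C", "Auditor D", "Auditor E"]
    print(assign_auditors(pool, n=3, seed=2024))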
Audits with specific scoped goals (2.2.3 Auditing & Compliance)
Audits of AI systems are easier to perform and yield clearer results when the scope and goals of the evaluation are formulated as precisely as possible [157] or are tied to concrete existing policy.
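As a minimal illustration of what a precisely scoped audit could look like, the sketch below encodes the scope, goals, and a policy hook as structured data; every field name and value here is hypothetical rather than drawn from any named framework.

    # A hypothetical, narrowly scoped audit specification (Python).
    # Field names and values are illustrative only.
    audit_scope = {
        "system": "general-purpose AI model under pre-deployment review",
        "goal": "assess compliance with the provider's published safety commitments",
        "questions": [
            "Does the model refuse the prohibited requests in the agreed test suite?",
            "Are evaluation results reported for every model version in scope?",
        ],
        "policy_reference": "placeholder for the concrete policy clause being audited",
        "out_of_scope": ["training data provenance", "downstream fine-tunes"],
    }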
Related entries:

Model development (2.4 Engineering & Development)
Model development > Data-related (1.1 Model)
Model evaluations (2.2.2 Testing & Evaluation)
Model evaluations > General evaluations (2.2.2 Testing & Evaluation)
Model evaluations > Benchmarking (3.2.1 Benchmarks & Evaluation)
Model evaluations > Red teaming (2.2.2 Testing & Evaluation)

Source:

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, from which 831 distinct AI risk mitigations were extracted into a living database.
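To make the shape of such a database concrete, the sketch below shows one way a single mitigation record could be represented in Python; the schema and field names are assumptions for illustration, not the Taxonomy's actual format, and the example values are taken from the entries above.

    from dataclasses import dataclass, field

    @dataclass
    class Mitigation:
        # One record in a hypothetical mitigation database; this schema
        # is an illustrative assumption, not the Taxonomy's actual one.
        name: str              # e.g. "Audits with specific scoped goals"
        category: str          # taxonomy code and label
        source: str            # framework the mitigation was extracted from
        related: list[str] = field(default_factory=list)

    scoped_audits = Mitigation(
        name="Audits with specific scoped goals",
        category="2.2.3 Auditing & Compliance",
        source="Risk Sources and Risk Management Measures in Support of "
               "Standards for General-Purpose AI Systems (2024)",
        related=["Model evaluations", "Model evaluations > Red teaming"],
    )

    # Grouping records by category code reproduces the category view above.
    by_category: dict[str, list[str]] = {}
    for m in [scoped_audits]:
        by_category.setdefault(m.category, []).append(m.name)
    print(by_category)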
Verify and Validate: Testing, evaluating, auditing, and red-teaming the AI system
Governance Actor: Regulator, standards body, or oversight entity shaping AI policy
Measure: Quantifying, testing, and monitoring identified AI risks