Red teaming, capability evaluations, adversarial testing, and performance verification.
Also in Risk & Assurance
Assessing and monitoring AI models against red-line risk or capability thresholds set by a third party, such as a standardization organization or regulator. Companies would further need to make technical, legal, and organizational preparations to halt development and deployment immediately when a breach occurs.
Multiple experts agreed that risk thresholds could be helpful, especially for chemical, biological, radiological, and nuclear (CBRN) risks and infrastructure risks. However, many also noted challenges in implementation. Several experts mentioned difficulties in defining and operationalising appropriate thresholds, particularly for more abstract risks such as bias or effects on democratic processes. Several emphasised the importance of third-party evaluation and legally defined red lines, and some suggested this approach could build useful norms and mechanisms within AI companies.

Multiple experts expressed scepticism about companies actually halting development if thresholds were breached, citing competitive pressures. Several noted that risk thresholds might be more effective for certain types of risks (e.g. CBRN risks) than for others (e.g. bias). Several experts highlighted the need for ongoing research and flexibility in setting thresholds as our understanding of AI risks evolves, and several mentioned that thresholds alone are insufficient and should be part of a broader regulatory approach. Some individual experts raised specific points, such as the potential for thresholds to become bureaucratic box-ticking exercises, the challenge of applying thresholds to open-source models, and the possibility that the most effective offensive systems might also be the best defensive ones.
Reasoning
Organization tests models against third-party-defined capability thresholds to determine deployment readiness.
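As a purely illustrative sketch of the gating logic this reasoning describes (not something specified in the paper), deployment readiness could be reduced to a check of evaluation results against externally defined limits. The evaluation names and threshold values below are hypothetical and would in practice be set by the third party:

```python
# Purely illustrative sketch of threshold-gated deployment.
# The evaluation names and limit values below are hypothetical; under the
# measure described above they would be defined by a third party such as
# a standardization organization or regulator.

# Hypothetical red-line thresholds published by the third party.
THIRD_PARTY_THRESHOLDS = {
    "cbrn_uplift_score": 0.20,    # maximum tolerated score on a CBRN-uplift evaluation
    "cyber_offense_score": 0.30,  # maximum tolerated score on an offensive-cyber evaluation
}

def deployment_decision(eval_results: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (may_deploy, breached_thresholds) for a model's evaluation results.

    A missing evaluation result is treated as a breach (fail closed).
    """
    breached = [
        name
        for name, limit in THIRD_PARTY_THRESHOLDS.items()
        if eval_results.get(name, float("inf")) > limit
    ]
    # Any breach would trigger the prepared halt to development and deployment.
    return (len(breached) == 0, breached)

# Example: a model exceeding the CBRN red line is not cleared for deployment.
ok, breaches = deployment_decision({"cbrn_uplift_score": 0.35, "cyber_offense_score": 0.10})
print(ok, breaches)  # -> False ['cbrn_uplift_score']
```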
Pre-deployment risk assessments
Comprehensive pre-deployment risk assessments that would cover reasonably foreseeable misuse and include dangerous capability evaluations incorporating post-training enhancements and collaboration with domain experts. Risk assessments would inform deployment decisions.
2.2.1 Risk Assessment
Third party pre-deployment model audits
External pre-deployment assessment to provide a judgment on the safety of a model. Auditors, which could be governments or independent third parties, would receive access to a fine-tuning API for testing, or other appropriate technical means.
2.2.3 Auditing & Compliance
External assessment of testing procedure
Bringing in external AI evaluation firms before deployment to assess and red-team the company's execution of dangerous capabilities evaluations.
2.2.2 Testing & Evaluation
Vetted researcher access
Giving good-faith, public-interest evaluation researchers access to black-box research APIs, together with technical and legal safe harbours that limit barriers imposed by usage-policy enforcement, logging, and stringent terms of service.
2.3.1 Deployment Management
Advanced model access for vetted external researchers
Examples of advanced access rights could include any of the following: increased control over sampling, access to fine-tuning functionality, the ability to inspect and modify model internals, access to training data, or additional features like stable model versions.
2.2.2 Testing & Evaluation
Data curation
Careful data curation prior to all development stages (including fine-tuning) to filter out high-risk content and ensure the training data is sufficiently high-quality.
1.1.1 Training Data
Effective Mitigations for Systemic Risks from General-Purpose AI
Uuk, Risto; Brouwer, Annemieke; Schreier, Tim; Dreksler, Noemi; Pulignano, Valeria; Bommasani, Rishi (2024)
The systemic risks posed by general-purpose AI models are a growing concern, yet the effectiveness of mitigations remains underexplored. Previous research has proposed frameworks for risk mitigation, but has left gaps in our understanding of the perceived effectiveness of measures for mitigating systemic risks. Our study addresses this gap by evaluating how experts perceive different mitigations that aim to reduce the systemic risks of general-purpose AI models. We surveyed 76 experts whose expertise spans AI safety; critical infrastructure; democratic processes; chemical, biological, radiological, and nuclear risks (CBRN); and discrimination and bias. Among 27 mitigations identified through a literature review, we find that a broad range of risk mitigation measures are perceived as effective in reducing various systemic risks and technically feasible by domain experts. In particular, three mitigation measures stand out: safety incident reports and security information sharing, third-party pre-deployment model audits, and pre-deployment risk assessments. These measures show both the highest expert agreement ratings (>60%) across all four risk areas and are most frequently selected in experts' preferred combinations of measures (>40%). The surveyed experts highlighted that external scrutiny, proactive evaluation and transparency are key principles for effective mitigation of systemic risks. We provide policy recommendations for implementing the most promising measures, incorporating the qualitative contributions from experts. These insights should inform regulatory frameworks and industry practices for mitigating the systemic risks associated with general-purpose AI.
Other (multiple stages)
Applies across multiple lifecycle stages
Other
Actor type not captured by the standard categories
Measure
Quantifying, testing, and monitoring identified AI risks