Oversight agencies, supervisory organizations, and regulatory authorities for AI governance.
Establish a mechanism to assess and monitor potential effects of frontier AI systems on the top ten most vulnerable National Critical Functions. (5.3.3 | Application to national critical functions) These effects should be re-evaluated at least once every 1-2 years, and should be informed by the “effect on model” and “effect on world” databases described in recommendations 5 and 6.
Actors: CISA
Reasoning
Assessing and monitoring the effects of frontier AI on critical functions is a structured risk identification and prioritization activity within the organization's control.
Functional: Identify essential categories of safety and security activities (“functions”)
Identify essential categories of safety and security activities (“functions”) that an organization must perform, and map these to a specified set of outcomes. This helps organizations organize their risk management activities at a high level and assess whether those activities are achieving the necessary outcomes. A functional approach is particularly helpful for identifying cross-cutting categories (e.g., organizational governance or insider security) that provide resilience against multiple known and unknown risks. It is also the most ready-to-adopt approach, given the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) and supplementary guidance from other researchers that begins to adapt this framework to cover catastrophic risks from frontier AI. We recommend that NIST or the Frontier Model Forum (FMF) establish consensus on the highest-priority categories of activities for frontier AI developers and develop a detailed catalog of measures (“controls”) for these activities.
3.2.2 Technical Standards
Functional: Identify essential categories of safety and security activities (“functions”) > Establish consensus on which categories of activities in the NIST AI RMF are the highest priority for frontier AI developers.
Establish consensus on which categories of activities in the NIST AI RMF are the highest priority for frontier AI developers. (3.3.1 | The NIST AI RMF) NIST and/or the FMF, with researcher input, should identify high-priority categories for frontier AI safety and security. To ensure defense-in-depth, frontier AI developers should implement multiple independent measures for these categories.
3.3.1 Industry Coordination
Functional: Identify essential categories of safety and security activities (“functions”) > Develop a detailed catalog of measures (“controls”) that are important for frontier AI safety and security.
Develop a detailed catalog of measures (“controls”) that are important for frontier AI safety and security. (3.3.3 | Providing detailed controls) For instance, NIST SP 800-53 lists 1,000 detailed controls for cybersecurity across 20 “families.” No current equivalent exists for AI, and it would be useful for frontier AI developers to have a similar catalog focused on frontier AI safety and security.
3.2.2 Technical Standards
Lifecycle: Describe the frontier AI development lifecycle
Describe the frontier AI development lifecycle and identify risk management activities that the organization must perform at each phase. This helps integrate safety and security into all stages of development, deployment, and monitoring. In cybersecurity, it has helped advance a “shift left” approach, i.e., designing safety into systems during development and tackling issues early in the software development lifecycle. While some AI development lifecycle frameworks exist, they need additional work to be adapted to a frontier AI context and to map appropriate risk management activities to each stage. We recommend that the FMF develop a consensus model that captures these key activities for developers, and that AI developers, philanthropists, and government funders pursue research supporting a “shift left” for frontier AI safety and security.
3.2.2 Technical Standards
Lifecycle: Describe the frontier AI development lifecycle > Establish a detailed lifecycle framework for frontier AI that describes safety and security activities at each stage.
Establish a detailed lifecycle framework for frontier AI that describes safety and security activities at each stage. (4.3.2 | Proposed lifecycle framework) This framework can build on work by the OECD while incorporating details from frontier AI developers, and should map activities to the NIST AI RMF where possible. It should ensure all phases are appropriately covered, which could include a “shift left” (see recommendation 4), and a stage for post-deployment monitoring and response.
3.2.2 Technical Standards
Lifecycle: Describe the frontier AI development lifecycle > Pursue research that supports a “shift left”
Pursue research that supports a “shift left” for frontier AI by emphasizing safety and security activities earlier in the development cycle. (4.3.3.1 | “Shifting left” on AI safety and security; 6.2.2 | Lifecycle) Potential research areas could include: software requirement specification techniques borrowed from safety-critical domains, dataset curation techniques, and foundational research to build safer and more secure AI systems.
2.4.1 Research & Foundations
Adapting cybersecurity frameworks to manage frontier AI risks: A defense-in-depth approach
Ee, Shaun; O'Brien, Joe; Williams, Zoe; El-Dakhakhni, Amanda; Aird, Michael; Lintz, Alex (2024)
The complex and evolving threat landscape of frontier AI development requires a multi-layered approach to risk management ("defense-in-depth"). By reviewing cybersecurity and AI frameworks, we outline three approaches that can help identify gaps in the management of AI-related risks. First, a functional approach identifies essential categories of activities ("functions") that a risk management approach should cover, as in the NIST Cybersecurity Framework (CSF) and AI Risk Management Framework (AI RMF). Second, a lifecycle approach instead assigns safety and security activities across the model development lifecycle, as in DevSecOps and the OECD AI lifecycle framework. Third, a threat-based approach identifies tactics, techniques, and procedures (TTPs) used by malicious actors, as in the MITRE ATT&CK and MITRE ATLAS databases. We recommend that frontier AI developers and policymakers begin by adopting the functional approach, given the existence of the NIST AI RMF and other supplementary guides, but also establish a detailed frontier AI lifecycle model and threat-based TTP databases for future use.
Other (stage not listed)
Applies to a lifecycle stage not captured by the standard categories
Governance Actor
Regulator, standards body, or oversight entity shaping AI policy
Measure
Quantifying, testing, and monitoring identified AI risks