Classifies prohibited and "high-impact" AI use cases based on risks to national security, human rights, and impacts on Federal personnel. Defines minimum risk-assessment standards, mandates monitoring mechanisms for high-impact AI, and establishes training guidelines for the development and use of AI systems.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a binding executive framework issued by the Executive Office of the President with mandatory requirements, enforcement mechanisms, and clear accountability structures for federal agencies in national security contexts.
The document covers 15 subdomains, with strong focus on discrimination and bias (1.1, 1.3), privacy and security (2.1, 2.2), misinformation (3.1), malicious actors (4.1, 4.2, 4.3), human-computer interaction (5.1, 5.2), governance (6.5), and AI safety failures (7.1, 7.2, 7.3, 7.4). Coverage is concentrated in security, misuse prevention, governance oversight, and AI safety domains relevant to national security contexts.
This framework primarily governs the National Security sector, with explicit coverage of military, intelligence, and defense operations. It also has significant coverage of Public Administration (excluding National Security) through its application to federal agencies, and touches on Immigration services within government operations.
The document comprehensively covers all AI lifecycle stages, with particular emphasis on the Verify and Validate, Deploy, and Operate and Monitor stages. It addresses planning through risk assessments; data collection and processing through data management policies; model building through development requirements; verification through extensive validation and testing requirements; deployment through notification and approval processes; and operation through ongoing monitoring and evaluation mechanisms.
The document explicitly mentions AI systems and AI models throughout, with particular focus on high-impact AI use cases in national security contexts. It does not explicitly define or distinguish between frontier AI, general purpose AI, task-specific AI, foundation models, generative AI, or predictive AI. There is no mention of compute thresholds or open-weight/open-source models. The framework applies broadly to AI as a component of National Security Systems without technical categorization by model type.
Executive Office of the President, National Security Council (NSC)
The document is issued by the Executive Office of the President and references NSC Deputies Committee approval processes for updates, indicating these are the proposing authorities.
Chief AI Officers, AI Governance Boards, Department Heads, privacy and civil liberties officers, Assistant to the President for National Security Affairs (APNSA), National Security Council
The document establishes multiple enforcement mechanisms, including Chief AI Officers with waiver authority, AI Governance Boards for oversight, and reporting requirements to APNSA and Department Heads.
Chief AI Officers, privacy and civil liberties officers, AI Governance Boards, oversight officials, APNSA, Department Heads
The framework establishes comprehensive monitoring through Chief AI Officers maintaining inventories, privacy and civil liberties officers conducting annual reviews, and reporting to APNSA.
Covered agencies (federal agencies using AI as component of National Security Systems), Department of Defense, Intelligence Community, Department Heads, components/sub-agencies
The framework explicitly applies to federal agencies using AI in national security contexts, including both developers and deployers of AI systems within these agencies.
15 subdomains (12 Good, 3 Minimal)