Directs the U.S. government to lead safe AI development, enhance national security through AI, and foster global AI governance. Prioritizes AI safety, security, and talent acquisition. Establishes AI governance frameworks and partnerships to ensure responsible AI use and international collaboration.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a Presidential National Security Memorandum issued by the Executive Office of the President with binding legal authority over federal agencies. It contains mandatory directives with specific timelines, enforcement mechanisms, and reporting requirements.
The document provides good coverage of approximately 12-14 subdomains, with a strong focus on malicious actors (4.1, 4.2, 4.3), AI system security (2.2), competitive dynamics (6.4), governance structures (6.5), and AI safety failures (7.1, 7.2, 7.3). Coverage is concentrated in the security, misuse-prevention, AI safety, and governance domains; discrimination, privacy, misinformation, and socioeconomic impacts receive limited coverage.
This document primarily governs AI use in the National Security sector, which it covers comprehensively along with Public Administration (excluding National Security). It also moderately covers the Scientific Research and Development Services, Information, and Professional and Technical Services sectors through its provisions on AI research, development, and technical expertise. Other sectors receive minimal or no direct governance.
The document addresses multiple AI lifecycle stages, with particular emphasis on Verify and Validate, Deploy, and Operate and Monitor. It establishes extensive testing, evaluation, and monitoring frameworks and also covers planning, design, and model development. The Build and Use Model stage receives moderate coverage, while data collection and processing receives minimal direct attention.
The document explicitly addresses AI models, AI systems, and frontier AI models with detailed definitions and governance provisions. It does not explicitly define or distinguish among general-purpose AI, task-specific AI, foundation models, generative AI, and predictive AI, though it references 'general-purpose models' and 'dual-use foundation models.' It establishes compute-based testing thresholds and addresses open-weight models in the context of security testing.
Executive Office of the President, President Joseph R. Biden Jr., National Security Council
This memorandum is issued by the President through the Executive Office of the President as a National Security Memorandum, with coordination through the National Security Council staff and the Assistant to the President for National Security Affairs.
Assistant to the President for National Security Affairs (APNSA), National Security Council, Office of Management and Budget (OMB), Chief AI Officers, AI Governance Boards, National Manager for NSS (Director of NSA), Federal Acquisition Regulatory Council (FAR Council)
Enforcement is primarily through executive oversight mechanisms including the APNSA, NSC coordination, mandatory reporting to the President, Chief AI Officers with governance authority, and AI Governance Boards within each covered agency.
AI Safety Institute (AISI) within NIST, Assistant to the President for National Security Affairs (APNSA), Chief AI Officers, AI Governance Boards, AI National Security Coordination Group, privacy and civil liberties officials, Office of the Director of National Intelligence (ODNI)
Monitoring is conducted through multiple mechanisms including AISI's safety testing and reporting, annual agency reports to the President through APNSA, Chief AI Officers maintaining inventories, and dedicated coordination groups tracking implementation.
Department of Defense (DOD), Intelligence Community (IC), Department of State, Department of Homeland Security (DHS), Department of Energy (DOE), Department of Justice (DOJ), Department of Commerce (Commerce), National Security Agency (NSA), Central Intelligence Agency (CIA), all covered agencies using AI on National Security Systems, private sector AI developers (voluntary participation)
The memorandum primarily targets federal agencies involved in national security, particularly those developing or deploying AI on National Security Systems. It also establishes voluntary engagement mechanisms with private sector AI developers for safety testing.
18 subdomains (8 Good, 10 Minimal)