Non-Model mitigations not clearly fitting above categories.
Implement mechanisms and tools for generating human-understandable explanations of AI system decisions, including feature importance, decision paths, confidence levels, and clear attribution of data sources and their characteristics used during inference.
Reasoning
Generates human-understandable explanations of AI decisions at runtime; technical explainability mechanism not clearly fitting other non-model categories.
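The explanation mechanism above can be sketched as code. This is a minimal illustrative example, not any framework's actual implementation: the names (`Explanation`, `explain_decision`) and the linear scoring model are invented for illustration; the point is that one runtime call packages feature importance, a decision path, a confidence level, and data-source attribution together.

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    decision: str
    confidence: float
    feature_importance: dict  # feature name -> relative weight
    decision_path: list       # ordered checks that produced the decision
    data_sources: list        # provenance of inputs used at inference

def explain_decision(features, weights, threshold=0.5, sources=()):
    """Score a simple linear model and package a human-readable explanation."""
    contributions = {name: features[name] * weights[name] for name in weights}
    score = sum(contributions.values())
    total = sum(abs(c) for c in contributions.values()) or 1.0
    return Explanation(
        decision="approve" if score >= threshold else "deny",
        confidence=min(abs(score - threshold) / total + 0.5, 1.0),
        feature_importance={k: abs(v) / total for k, v in contributions.items()},
        decision_path=[f"score {score:.2f} vs threshold {threshold}"],
        data_sources=list(sources),
    )
```

For example, `explain_decision({"income": 0.8, "debt": 0.3}, {"income": 1.0, "debt": -0.5}, sources=["credit_bureau"])` yields an "approve" decision whose feature importances sum to 1 and which records the data source used.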
Establish AI system access controls
Implement comprehensive access management including role-based access control (RBAC), authentication mechanisms, and audit logging for AI models and associated resources.
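The RBAC and audit-logging elements of this control can be sketched as follows. All role names, permission strings, and resources here are invented for illustration; a real deployment would back the audit log with an append-only, tamper-evident store and integrate an actual authentication provider.

```python
from datetime import datetime, timezone

# Hypothetical role-to-permission table for an AI model registry.
ROLE_PERMISSIONS = {
    "ml_engineer": {"model:read", "model:deploy"},
    "auditor": {"model:read", "audit:read"},
    "viewer": {"model:read"},
}

audit_log = []  # in-memory stand-in for an append-only audit store

def check_access(user: str, role: str, action: str) -> bool:
    """Grant the action only if the role permits it; log every attempt."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "action": action,
        "allowed": allowed,
    })
    return allowed
```

Note that denied attempts are logged as well as granted ones; audit completeness is the point of pairing the two mechanisms in a single control.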
2.3.2 Access & Security Controls
Implement AI asset protection framework
Deploy technical protection measures including encryption, secure enclaves, and versioning controls for AI models and associated data.
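One of the listed protection measures, versioning control, can be sketched with content hashing: pin each released model artifact to a hash so any modification is detectable. This is an illustrative fragment only; encryption and secure enclaves are deployment-specific and omitted here.

```python
import hashlib
import hmac

def fingerprint(model_bytes: bytes) -> str:
    """SHA-256 content hash used as an immutable version identifier."""
    return hashlib.sha256(model_bytes).hexdigest()

def verify_artifact(model_bytes: bytes, pinned_hash: str) -> bool:
    """Reject any artifact whose bytes differ from the pinned version."""
    return hmac.compare_digest(fingerprint(model_bytes), pinned_hash)
```

`hmac.compare_digest` is used instead of `==` so the comparison runs in constant time, a standard precaution when checking integrity values.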
1.2.4 Security Infrastructure
Establish security validation framework
Execute comprehensive pre-deployment security validation including AI-specific vulnerability assessments, penetration testing, and security requirement verification.
2.2.2 Testing & Evaluation
Implement continuous security testing system
Deploy ongoing security testing mechanisms including automated vulnerability scanning, continuous security monitoring, and periodic reassessment of security controls.
2.2 Risk & Assurance
Implement AI security defense system
Deploy active defense mechanisms combining continuous security monitoring, input validation, adversarial detection, and adaptive response capabilities specific to AI systems.
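The input-validation and adversarial-detection elements of this control can be sketched as two pre-inference checks: a schema/range gate and a simple distance-from-training-distribution score. The feature names, ranges, and thresholds are invented for illustration, and a z-score check is only one crude stand-in for adversarial detection.

```python
import statistics

# Hypothetical valid ranges, e.g. derived from the training data.
FEATURE_RANGES = {"amount": (0.0, 10_000.0), "age": (0.0, 120.0)}

def validate_input(sample: dict) -> list:
    """Return a list of violations; an empty list means the input passes."""
    issues = []
    for name, (lo, hi) in FEATURE_RANGES.items():
        if name not in sample:
            issues.append(f"missing feature: {name}")
        elif not (lo <= sample[name] <= hi):
            issues.append(f"out-of-range: {name}={sample[name]}")
    return issues

def anomaly_score(sample_values, train_means, train_stdevs):
    """Mean z-score of the input; large values flag suspicious inputs."""
    return statistics.fmean(
        abs(v - m) / s
        for v, m, s in zip(sample_values, train_means, train_stdevs)
    )
```

An adaptive response layer would then route high-scoring inputs to rejection, human review, or tightened monitoring rather than passing them straight to the model.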
1.2 Non-Model
Establish AI system integration framework
Define and implement a comprehensive framework for AI system integration including architecture review, compatibility testing, and integration validation processes.
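One concrete integration-validation step implied above is compatibility testing between a model's declared outputs and what downstream consumers expect. The sketch below uses an invented field-name-to-type schema format purely for illustration.

```python
def compatible(producer_schema: dict, consumer_schema: dict) -> bool:
    """Every field the consumer requires must exist with a matching type."""
    return all(
        producer_schema.get(field) == ftype
        for field, ftype in consumer_schema.items()
    )
```

Extra fields emitted by the producer are tolerated; only missing or type-mismatched required fields fail the check.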
2.2.2 Testing & Evaluation
The Unified Control Framework: Establishing a Common Foundation for Enterprise AI Governance, Risk Management and Regulatory Compliance
Eisenberg, Ian W.; Gamboa, Lucía; Sherman, Eli (2025)
The rapid adoption of AI systems presents enterprises with a dual challenge: accelerating innovation while ensuring responsible governance. Current AI governance approaches suffer from fragmentation, with risk management frameworks that focus on isolated domains, regulations that vary across jurisdictions despite conceptual alignment, and high-level standards lacking concrete implementation guidance. This fragmentation increases governance costs and creates a false dichotomy between innovation and responsibility. We propose the Unified Control Framework (UCF): a comprehensive governance approach that integrates risk management and regulatory compliance through a unified set of controls. The UCF consists of three key components: (1) a comprehensive risk taxonomy synthesizing organizational and societal risks, (2) structured policy requirements derived from regulations, and (3) a parsimonious set of 42 controls that simultaneously address multiple risk scenarios and compliance requirements. We validate the UCF by mapping it to the Colorado AI Act, demonstrating how our approach enables efficient, adaptable governance that scales across regulations while providing concrete implementation guidance. The UCF reduces duplication of effort, ensures comprehensive coverage, and provides a foundation for automation, enabling organizations to achieve responsible AI governance without sacrificing innovation speed.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks