Governance frameworks, formal policies, and strategic alignment mechanisms.
Establishes leadership, roles, and policies for AI oversight. Ensures commitment to rights, privacy, and ethical use, with input from internal and external stakeholders.
Reasoning
Establishes a governance framework spanning leadership oversight, roles, and policies for AI systems.
Establish governance structure
Organizations should establish a governance structure that supports the trustworthy, lawful and resilient development and operation of AI systems. This structure should align with the EU AI Act, the GDPR, and ISO/IEC 42001 and ISO/IEC 23894.
2.1 Oversight & Accountability
Demonstrate leadership
Senior management should demonstrate leadership by allocating resources, setting strategic goals and formalizing policies that reflect the organization’s commitment to responsible AI. An AI policy should be created to outline principles related to fundamental rights (FRIA), personal data protection (DPIA) and risk-based thinking (ISO/IEC 23894).
2.1.3 Policies & Procedures
Define and document roles and responsibilities
Organizations should define and document roles and responsibilities for managing AI risks, ensuring clear accountability for FRIA, DPIA and compliance processes. They should also establish mechanisms for engaging with external stakeholders (including users, affected communities and regulators) to promote transparency and build trust.
2.1.2 Roles & Accountability
Risk Identification Layer
Identifies legal, ethical, privacy and societal risks based on how and where the AI system is used. It uses FRIA, DPIA and ISO methods, with attention to affected groups and contexts.
2.2.1 Risk Assessment
Risk Identification Layer > Identify all risks
Organizations should systematically identify all relevant legal, ethical, privacy and operational risks across the lifecycle of the AI system. This includes applying the FRIA, DPIA and ISO standards in a coordinated manner. They should begin with planning and scoping to understand the system’s purpose, the individuals it may impact and the societal or regulatory context. Personal data risks should be analyzed in line with DPIA guidelines, with a focus on profiling, automated decision-making and sensitive data.
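The coordinated use of FRIA, DPIA and ISO methods can be pictured as a risk register in which every identified risk is tagged with the instrument that covers it. The sketch below is illustrative only; the entry fields, instrument names and example risks are assumptions, not part of the framework itself:

```python
from dataclasses import dataclass, field
from enum import Enum

class Instrument(Enum):
    FRIA = "Fundamental Rights Impact Assessment"
    DPIA = "Data Protection Impact Assessment"
    ISO = "ISO/IEC 23894 risk analysis"

@dataclass
class RiskEntry:
    description: str
    category: str                 # e.g. "legal", "ethical", "privacy", "operational"
    instruments: list             # which assessments cover this risk
    affected_groups: list = field(default_factory=list)

# Hypothetical register entries for a CV-screening system.
register = [
    RiskEntry("Profiling of job applicants", "privacy",
              [Instrument.DPIA, Instrument.FRIA],
              affected_groups=["applicants"]),
    RiskEntry("Model drift degrading accuracy", "operational",
              [Instrument.ISO]),
]

# Coordinated identification means no risk is left without an instrument.
assert all(entry.instruments for entry in register)
```

A register like this makes the "coordinated manner" concrete: a privacy risk involving profiling is covered by both DPIA and FRIA, while a purely operational risk falls to ISO-based analysis.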
2.2.1 Risk Assessment
Risk Identification Layer > Evaluate internal and external factors
Organizations should also evaluate internal and external factors that influence risk, such as strategic goals, regulatory trends and public expectations. Risk identification should involve consultation with stakeholders to uncover context-specific and overlooked risks, especially those affecting vulnerable groups.
2.2.1 Risk Assessment
Risk Assessment Layer
Evaluates the severity and likelihood of each risk, considering impacts on individuals, society and the organization. Risks are assessed separately for rights, data and operations.
2.2.1 Risk Assessment
Risk Assessment Layer > Assess risks
Organizations should assess the identified risks by evaluating their likelihood, severity and impact on individuals, groups and organizational operations. Each fundamental right should be assessed separately using a structured FRIA approach (with no aggregation across rights) while DPIA should guide the evaluation of data protection risks like unauthorized access, bias or re-identification.
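The "no aggregation across rights" rule can be sketched as follows: each fundamental right keeps its own likelihood-times-severity score, and reporting highlights the worst-affected right rather than a single combined figure. The 1–5 scale, the right names and the ratings below are illustrative assumptions:

```python
# Hypothetical FRIA scoring sketch: each fundamental right is scored
# separately, and scores are never summed or averaged into one number.
SCALE = range(1, 6)  # 1 = negligible ... 5 = critical

def score(likelihood: int, severity: int) -> int:
    assert likelihood in SCALE and severity in SCALE
    return likelihood * severity

# Per-right assessment for a hypothetical CV-screening system.
fria = {
    "non-discrimination":    score(likelihood=4, severity=5),
    "data protection":       score(likelihood=3, severity=4),
    "freedom of expression": score(likelihood=1, severity=2),
}

# Report the highest-scoring right, not an aggregate across rights.
worst_right = max(fria, key=fria.get)
print(worst_right, fria[worst_right])  # non-discrimination 20
```

Keeping the scores disaggregated prevents a severe impact on one right from being averaged away by low scores on others.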
2.2.1 Risk Assessment
Risk Assessment Layer > Risk analysis
Organizations should apply ISO-based risk analysis to consider operational risks such as system reliability, model drift and exposure to adversarial attacks. Both human and organizational impacts should be taken into account to support ethical and business-aligned decision-making.
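One common ISO-style way to turn likelihood and severity ratings into decisions is a risk matrix that maps each pair onto a treatment priority. The thresholds and example ratings here are illustrative assumptions, not values prescribed by the standards:

```python
def priority(likelihood: int, severity: int) -> str:
    """Map a 1-5 likelihood/severity pair to a treatment priority."""
    product = likelihood * severity
    if product >= 15:
        return "treat immediately"
    if product >= 8:
        return "treat with plan"
    return "monitor"

# Operational risks named in the text, with assumed ratings.
print(priority(4, 4))  # model drift        -> treat immediately
print(priority(2, 5))  # adversarial attack -> treat with plan
print(priority(1, 3))  # minor reliability  -> monitor
```

The same matrix can be applied to human impacts and organizational impacts separately, so that a low business cost does not mask a high cost to individuals.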
2.2.1 Risk Assessment
Structuring AI Risk Management Framework: EU AI Act FRIA, GDPR DPIA and ISO 42001/23894
Parlov, Natalija; Mateša, Blanka; Mladinić, Anamarija (2025)
The growing regulatory focus on trustworthy AI systems has accelerated the need for integrated approaches to AI risk management. This paper presents a structured framework that aligns the EU AI Act's Fundamental Rights Impact Assessment (FRIA) and the GDPR's Data Protection Impact Assessment (DPIA) with the risk management principles and processes of ISO/IEC 42001 and ISO/IEC 23894. The aim is to support organizations in addressing legal, ethical, privacy and operational risks through a unified, standards-aligned approach. It is hypothesized that embedding FRIA and DPIA procedures within ISO-compliant risk management structures can streamline compliance, strengthen governance and promote accountability and transparency. The proposed framework outlines six core phases: governance, risk identification, risk assessment, integrated impact assessment, risk treatment, and monitoring and review. A dynamic feedback mechanism enables continuous improvement and adaptation to emerging risks and evolving societal expectations. By structuring these components into a coherent framework, the research supports organizations in aligning regulatory obligations with international best practices, reducing redundancy and advancing responsible, resilient AI innovation. © 2025 IEEE.
Other (outside lifecycle)
Outside the standard AI system lifecycle
Governance Actor
Regulator, standards body, or oversight entity shaping AI policy
Govern
Policies, processes, and accountability structures for AI risk management
Primary
6.5 Governance failure