Structured analysis to identify, characterize, and prioritize potential harms and risks.
Identifies legal, ethical, privacy and societal risks based on how and where the AI system is used. It applies FRIA, DPIA and ISO methods, with attention to affected groups and contexts.
Reasoning
Identifies and analyzes legal, ethical, privacy, and societal risks through structured assessment methodologies (FRIA, DPIA, ISO methods).
Identify all risks
Organizations should systematically identify all relevant legal, ethical, privacy and operational risks across the lifecycle of the AI system. This includes applying the FRIA, DPIA and ISO standards in a coordinated manner. They should begin with planning and scoping to understand the system’s purpose, the individuals it may impact and the societal or regulatory context. Personal data risks should be analyzed in line with DPIA guidelines, with a focus on profiling, automated decision-making and sensitive data.
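A coordinated register is one concrete way to hold FRIA, DPIA and ISO findings in a single place. The sketch below is a minimal, hypothetical schema; the field names, categories and example entry are illustrative assumptions, not structures mandated by any of the three frameworks:

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskCategory(Enum):
    LEGAL = "legal"
    ETHICAL = "ethical"
    PRIVACY = "privacy"
    OPERATIONAL = "operational"


@dataclass
class RiskEntry:
    """One row in a coordinated risk register (hypothetical schema)."""
    risk_id: str
    description: str
    category: RiskCategory
    lifecycle_phase: str                       # e.g. "design", "deployment", "monitoring"
    affected_groups: list[str] = field(default_factory=list)
    source_assessment: str = "ISO/IEC 23894"   # which assessment surfaced the risk


register: list[RiskEntry] = [
    RiskEntry(
        risk_id="R-001",
        description="Profiling of applicants via automated scoring",
        category=RiskCategory.PRIVACY,
        lifecycle_phase="design",
        affected_groups=["job applicants"],
        source_assessment="DPIA",
    ),
]

# Personal-data risks can then be pulled out for DPIA-specific analysis
# (profiling, automated decision-making, sensitive data).
dpia_risks = [r for r in register if r.source_assessment == "DPIA"]
```

Keeping the originating assessment on each entry lets one register serve all three methodologies without duplicating the same risk in three separate documents.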
2.2.1 Risk Assessment
Evaluate internal and external factors
Organizations should also evaluate internal and external factors that influence risk, such as strategic goals, regulatory trends and public expectations. Risk identification should involve consultation with stakeholders to uncover context-specific and overlooked risks, especially those affecting vulnerable groups.
2.2.1 Risk Assessment
Governance Layer
Establishes leadership, roles and policies for AI oversight. Ensures commitment to rights, privacy and ethical use, with input from internal and external stakeholders.
2.1.3 Policies & Procedures
Governance Layer > Establish governance structure
Organizations should establish a governance structure that supports the trustworthy, lawful and resilient development and operation of AI systems. This structure should align with the EU AI Act, the GDPR and ISO/IEC 42001 and 23894.
2.1 Oversight & Accountability
Governance Layer > Demonstrate leadership
Senior management should demonstrate leadership by allocating resources, setting strategic goals and formalizing policies that reflect the organization’s commitment to responsible AI. An AI policy should be created to outline principles related to fundamental rights (FRIA), personal data protection (DPIA) and risk-based thinking (ISO/IEC 23894).
2.1.3 Policies & Procedures
Governance Layer > Define and document roles and responsibilities
Organizations should define and document roles and responsibilities for managing AI risks, ensuring clear accountability for FRIA, DPIA and compliance processes. They should also establish mechanisms for engaging with external stakeholders (including users, affected communities and regulators) to promote transparency and build trust.
2.1.2 Roles & Accountability
Risk Assessment Layer
Evaluates the severity and likelihood of each risk, considering impacts on individuals, society and the organization. Risks are assessed separately for rights, data and operations.
2.2.1 Risk Assessment
Risk Assessment Layer > Assess risks
Organizations should assess the identified risks by evaluating their likelihood, severity and impact on individuals, groups and organizational operations. Each fundamental right should be assessed separately using a structured FRIA approach (with no aggregation across rights) while DPIA should guide the evaluation of data protection risks like unauthorized access, bias or re-identification.
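The no-aggregation rule above can be made concrete with a simple likelihood × severity matrix computed per right. In this sketch the 1–5 scales, the example rights, their input values and the treatment threshold are all illustrative assumptions rather than figures from the FRIA methodology:

```python
def score(likelihood: int, severity: int) -> int:
    """Risk matrix cell: product of likelihood and severity, each on a 1-5 scale."""
    assert 1 <= likelihood <= 5 and 1 <= severity <= 5
    return likelihood * severity


# Hypothetical per-right inputs: (likelihood, severity)
fria_inputs = {
    "non-discrimination": (3, 5),
    "data protection": (4, 3),
    "freedom of expression": (2, 2),
}

# One score per fundamental right, kept separate: no sum or average
# across rights, mirroring the structured FRIA approach.
fria_scores = {right: score(l, s) for right, (l, s) in fria_inputs.items()}

# Flag rights whose individual score crosses a (hypothetical) treatment threshold.
needs_treatment = [right for right, value in fria_scores.items() if value >= 12]
```

Because the scores are never combined, a severe risk to one right cannot be masked by low scores on the others, which is the point of assessing each right separately.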
2.2.1 Risk Assessment
Structuring AI Risk Management Framework: EU AI Act FRIA, GDPR DPIA and ISO 42001/23894
Parlov, Natalija; Mateša, Blanka; Mladinić, Anamarija (2025)
The growing regulatory focus on trustworthy AI systems has accelerated the need for integrated approaches to AI risk management. This paper presents a structured framework that aligns the EU AI Act's Fundamental Rights Impact Assessment (FRIA) and the GDPR's Data Protection Impact Assessment (DPIA) with the risk management principles and processes of ISO/IEC 42001 and ISO/IEC 23894. The aim is to support organizations in addressing legal, ethical, privacy and operational risks through a unified, standards-aligned approach. It is hypothesized that embedding FRIA and DPIA procedures within ISO-compliant risk management structures can streamline compliance, strengthen governance and promote accountability and transparency. The proposed framework outlines six core phases: governance, risk identification, risk assessment, integrated impact assessment, risk treatment, and monitoring and review. A dynamic feedback mechanism enables continuous improvement and adaptation to emerging risks and evolving societal expectations. By structuring these components into a coherent framework, the research supports organizations in aligning regulatory obligations with international best practices, reducing redundancy and advancing responsible, resilient AI innovation. © 2025 IEEE.
Plan and Design
Designing the AI system, defining requirements, and planning development
Deployer
Entity that integrates and deploys the AI system for end users
Map
Identifying and documenting AI risks, contexts, and impacts
Primary
6.5 Governance failure
Other