Structured analysis to identify, characterize, and prioritize potential harms and risks.
Our e-ERM framework extends traditional ERM approaches to balance economic and ethical considerations. The RAT implements a risk management cycle that gives ethical implications equal weight alongside financial, operational, and strategic factors when identifying, assessing, and mitigating risks. This holistic approach addresses both financial concerns and ethical limitations across the AI system lifecycle.
Reasoning
Framework for identifying and assessing risks by balancing ethical and economic factors across the AI system lifecycle.
AI Ethical Practices
To address this gap, organizations must ground AI ethical principles in actionable practices that connect to their business applications.
2.4.2 Design Standards
Dynamic Monitoring of AI Systems
We identified three essential components to support real-time responses to occurrences of AI risks: mechanisms for continuous monitoring and sensing (the CMS), an agile risk assessment tool and approach (part of the RAT), and the RRD, which records the evolving relationships between risks and the best practices that mitigate them.
2.3.3 Monitoring & Logging
Design and implement ethical solutions
The e-ERM design proposes using the RRD to link the RAT's risk identification and assessment process with emerging best-practice mitigation approaches. The most appropriate approach is selected by considering which AI capability the system uses and, therefore, which AI ethical risk category is impacted.
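The RRD linkage described above can be sketched as a lookup keyed by AI capability and ethical risk category. This is a minimal illustrative sketch, not the paper's implementation; all class names, fields, and example entries (e.g. `RiskRelationshipDatabase`, the "facial recognition"/"privacy" pair) are hypothetical.

```python
# Hypothetical sketch: the RRD maps (AI capability, ethical risk category)
# pairs to candidate best-practice mitigations, so a risk assessed by the
# RAT can be matched to possible remedies. All names are illustrative.
from dataclasses import dataclass, field


@dataclass
class Risk:
    name: str
    ai_capability: str   # which AI capability the system uses
    risk_category: str   # which AI ethical risk category is impacted


@dataclass
class RiskRelationshipDatabase:
    # (capability, category) -> list of best-practice mitigations
    mitigations: dict = field(default_factory=dict)

    def register(self, capability: str, category: str, practice: str) -> None:
        """Record an evolving risk-to-practice relationship."""
        self.mitigations.setdefault((capability, category), []).append(practice)

    def select_mitigations(self, risk: Risk) -> list:
        """Return candidate practices for a risk identified by the RAT."""
        return self.mitigations.get((risk.ai_capability, risk.risk_category), [])


rrd = RiskRelationshipDatabase()
rrd.register("facial recognition", "privacy", "apply on-device anonymization")

risk = Risk("biometric data exposure", "facial recognition", "privacy")
print(rrd.select_mitigations(risk))  # ['apply on-device anonymization']
```

The key design point this illustrates is that mitigation selection is driven by the capability/category pair, not by the individual risk, so newly recorded best practices immediately apply to every risk in that category.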
2.4.2 Design Standards
Designing an Enhanced Enterprise Risk Management System to Mitigate Ethical Risks of Artificial Intelligence Applications
McGrath, Quintin P.; Hevner, Alan R.; de Vreede, Gert-Jan (2025)
The introduction of artificial intelligence (AI) capabilities in business applications provides substantial benefits but requires organizations to manage critical AI ethical risks. We survey a range of large organizations on their use of enterprise risk management (ERM) systems to predict and mitigate the ethical risks of AI. Four serious gaps in current ERM systems are identified: AI ethical principles do not translate effectively to ethical practices; real-time monitoring of AI ethical risks is needed; ERM systems emphasize economic, not ethical risks; and when ethical risks are identified, no ready solutions are available for remedy. To address these gaps, we propose a proactive approach to managing ethical risks by extending the capabilities of current ERM systems. An enhanced ERM system framework is designed and evaluated by subject matter expert focus groups. We conclude with observations and future research directions on the need for more aggressive proethical management oversight as organizations move to ubiquitous use of AI-driven business applications.
Other (multiple stages)
Applies across multiple lifecycle stages
Deployer
Entity that integrates and deploys the AI system for end users
Measure
Quantifying, testing, and monitoring identified AI risks
Primary
6.5 Governance failure