The e-ERM design proposes using the RRD to link the RAT's risk identification and assessment process with potential emerging best-practice mitigation approaches. The most appropriate approach is selected by considering which AI capability the system uses and, therefore, which AI ethical risk category is impacted.
For example, if the trigger is an ethical risk associated with facial recognition, the risk category may relate to privacy, an element of the nonmaleficence AI ethical principle. Best practices could include removing the facial recognition capability from the AIS, constraining its use, or adding additional controls.
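The capability-to-category-to-mitigation lookup described above can be sketched as a pair of tables. This is a minimal illustration, not the paper's implementation: all names, categories, and mitigation strings are assumptions for the sketch.

```python
# Hypothetical sketch of an RRD lookup; mappings are illustrative only.

# Map each AI capability to the ethical risk category it can trigger.
CAPABILITY_RISK = {
    "facial_recognition": "privacy",       # under the nonmaleficence principle
    "automated_decisioning": "fairness",
}

# Map each risk category to candidate best-practice mitigations,
# ordered from most to least restrictive.
RRD = {
    "privacy": [
        "remove capability from the AIS",
        "constrain use of the capability",
        "add additional controls",
    ],
    "fairness": [
        "audit training data for bias",
        "add human review of decisions",
    ],
}

def candidate_mitigations(capability: str) -> list[str]:
    """Return the mitigations linked to the risk category a capability triggers."""
    category = CAPABILITY_RISK[capability]
    return RRD[category]

print(candidate_mitigations("facial_recognition"))
```

Ordering the mitigations from most to least restrictive lets a caller walk the list until one is acceptable for the business context.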
Reasoning
Framework links risk identification and assessment to mitigation selection based on AI ethical categories and capabilities.
AI Ethical Practices
To address this gap, organizations must ground AI ethical principles in actionable practices that relate to their business applications.
Dynamic Monitoring of AI Systems
We identified three essential components to support real-time reactions to occurrences of AI risks: mechanisms for continuous monitoring and sensing (using the CMS), an agile risk assessment tool and approach (part of the RAT), and the RRD to record the evolving relationships between risks and the best practices to mitigate them.
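The interaction of the three components can be sketched as a single monitoring pass: the CMS senses risk events, the RAT assesses them, and the RRD records the resulting risk-to-mitigation links. Class and method names here are assumptions for illustration, not the authors' implementation.

```python
# Illustrative wiring of CMS -> RAT -> RRD; names are assumptions.
from dataclasses import dataclass, field

@dataclass
class CMS:
    """Continuous monitoring and sensing: emits detected risk events."""
    events: list = field(default_factory=list)

    def sense(self):
        yield from self.events

@dataclass
class RAT:
    """Agile risk assessment: scores an event's severity (0 to 1)."""
    def assess(self, event) -> float:
        return event.get("severity", 0.0)

@dataclass
class RRD:
    """Records the evolving relationships between risks and mitigations."""
    links: dict = field(default_factory=dict)

    def record(self, risk, mitigation):
        self.links.setdefault(risk, []).append(mitigation)

def monitor_cycle(cms: CMS, rat: RAT, rrd: RRD, threshold: float = 0.5) -> dict:
    """One real-time pass: sense events, assess severity, record mitigations."""
    for event in cms.sense():
        if rat.assess(event) >= threshold:
            rrd.record(event["risk"], "apply best-practice mitigation")
    return rrd.links

cms = CMS(events=[{"risk": "privacy", "severity": 0.8}])
print(monitor_cycle(cms, RAT(), RRD()))  # {'privacy': ['apply best-practice mitigation']}
```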
Balance Economic and Ethical Risks
Our e-ERM framework extends traditional ERM approaches to balance both economic and ethical considerations. The RAT implements a risk management cycle that gives equal weight to ethical implications alongside financial, operational, and strategic factors when identifying, assessing, and mitigating risks. This holistic approach addresses both financial concerns and ethical limitations across the AIS lifecycle.
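The equal-weight idea can be illustrated as a simple scoring function over the four factor types named above. This is a hedged sketch: the uniform averaging scheme is an assumption for illustration, not the RAT's actual scoring method.

```python
# Sketch of equal-weight risk scoring; the averaging scheme is an assumption.
FACTORS = ("financial", "operational", "strategic", "ethical")

def overall_risk(scores: dict[str, float]) -> float:
    """Average the four factor scores (each 0 to 1) with equal weight."""
    return sum(scores[f] for f in FACTORS) / len(FACTORS)

# An ethically severe but financially minor risk still scores substantially.
example = {"financial": 0.2, "operational": 0.4, "strategic": 0.1, "ethical": 0.9}
print(round(overall_risk(example), 2))  # 0.4
```

Under a purely economic weighting the example above would be discounted; the equal weight keeps the ethical dimension from being averaged away in prioritization.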
Designing an Enhanced Enterprise Risk Management System to Mitigate Ethical Risks of Artificial Intelligence Applications
McGrath, Quintin P.; Hevner, Alan R.; de Vreede, Gert-Jan (2025)
The introduction of artificial intelligence (AI) capabilities in business applications provides substantial benefits but requires organizations to manage critical AI ethical risks. We survey a range of large organizations on their use of enterprise risk management (ERM) systems to predict and mitigate the ethical risks of AI. Four serious gaps in current ERM systems are identified: AI ethical principles do not translate effectively to ethical practices; real-time monitoring of AI ethical risks is needed; ERM systems emphasize economic, not ethical risks; and when ethical risks are identified, no ready solutions are available for remedy. To address these gaps, we propose a proactive approach to managing ethical risks by extending the capabilities of current ERM systems. An enhanced ERM system framework is designed and evaluated by subject matter expert focus groups. We conclude with observations and future research directions on the need for more aggressive proethical management oversight as organizations move to ubiquitous use of AI-driven business applications.
Verify and Validate
Testing, evaluating, auditing, and red-teaming the AI system
Deployer
Entity that integrates and deploys the AI system for end users
Manage
Prioritizing, responding to, and mitigating AI risks