Human-like immoral decisions
Risk Domain
AI systems that fail to perform reliably or effectively under varying conditions, leaving them prone to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.
"If we design our machines to match human levels of ethical decision-making, such machines would then proceed to take some immoral actions (since we humans have had occasion to take immoral actions ourselves)."(p. 686)
Entity: who or what caused the harm
Intent: whether the harm was intentional or accidental
Timing: whether the risk arises pre- or post-deployment
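The three classification dimensions above can be sketched as a simple data model. This is a minimal illustration only; the class and field names are hypothetical, not part of any official schema from the repository or from Meek et al. (2016).

```python
from dataclasses import dataclass

# Illustrative value sets for each dimension (assumed, not an official vocabulary).
ENTITIES = ("Human", "AI system", "Other")
INTENTS = ("Intentional", "Unintentional", "Other")
TIMINGS = ("Pre-deployment", "Post-deployment", "Other")

@dataclass
class RiskClassification:
    risk: str    # name of the risk
    entity: str  # who or what caused the harm
    intent: str  # whether the harm was intentional or accidental
    timing: str  # whether the risk arises pre- or post-deployment

# The entry on this page, encoded in the sketch above.
example = RiskClassification(
    risk="Human-like immoral decisions",
    entity="AI system",
    intent="Intentional",
    timing="Post-deployment",
)
print(example.entity, example.timing)
```

Each risk in the table below would map onto one such record, with the risk name in the first column and the remaining columns filling the three dimensions.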
Other risks from Meek et al. (2016) (17)
Risk | Sub-domain | Entity | Intent | Timing
Unethical decision making | 7.3 Lack of capability or robustness | AI system | Intentional | Post-deployment
Privacy | 2.1 Compromise of privacy by leaking or correctly inferring sensitive information | Human | Intentional | Post-deployment
Human dignity/respect | 5.2 Loss of human agency and autonomy | Other | Other | Post-deployment
Decision making transparency | 7.4 Lack of transparency or interpretability | AI system | Other | Post-deployment
Safety | 7.3 Lack of capability or robustness | AI system | Other | Post-deployment
Law abiding | 7.3 Lack of capability or robustness | AI system | Unintentional | Post-deployment