Machine ethics
Risk Domain
AI systems that fail to perform reliably or effectively under varying conditions, leaving them prone to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.
"These evaluations assess the morality of LLMs, focusing on issues such as their ability to distinguish between moral and immoral actions, and the circumstances in which they fail to do so." (p. 12)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Part of Safety & Trustworthiness
Other risks from InfoComm Media Development Authority & AI Verify Foundation (2023) (22)
Category | Mapped risk | Entity | Intent | Timing
Safety & Trustworthiness | 7.0 AI System Safety, Failures & Limitations | Human | Intentional | Other
Safety & Trustworthiness > Toxicity generation | 1.2 Exposure to toxic content | AI system | Other | Other
Safety & Trustworthiness > Bias | 1.1 Unfair discrimination and misrepresentation | AI system | Other | Other
Safety & Trustworthiness > Psychological traits | 7.3 Lack of capability or robustness | AI system | Other | Other
Safety & Trustworthiness > Robustness | 7.3 Lack of capability or robustness | AI system | Unintentional | Other
Safety & Trustworthiness > Data governance | 2.1 Compromise of privacy by leaking or correctly inferring sensitive information | AI system | Unintentional | Other