AI-rulemaking for human behaviour
The risk that AI systems fail to perform reliably or effectively under varying conditions, leaving them prone to errors and failures that can have significant consequences, especially in critical applications or in areas that require moral reasoning.
"AI rulemaking for humans can be the result of the decision process of an AI system when the information computed is used to restrict or direct human behavior. The decision process of AI is rational and depends on the baseline programming. Without the access to emotions or a consciousness, decisions of an AI algorithm might be good to reach a certain specified goal, but might have unintended consequences for the humans involved (Banerjee et al., 2017)."(p. 821)
Part of AI Ethics
Other risks from Wirtz, Weyerer & Sturm (2020) (11)
AI Law and Regulation
6.5 Governance failure | AI Law and Regulation > Governance of autonomous intelligence systems
6.5 Governance failure | AI Law and Regulation > Responsibility and accountability
6.5 Governance failure | AI Law and Regulation > Privacy and safety
4.1 Disinformation, surveillance, and influence at scale | AI Ethics
7.3 Lack of capability or robustness | AI Ethics > Compatibility of AI vs. human value judgement
7.3 Lack of capability or robustness