
AI-rulemaking for human behaviour

The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration

Wirtz, Weyerer & Sturm (2020)

Risk Domain

AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.

"AI rulemaking for humans can be the result of the decision process of an AI system when the information computed is used to restrict or direct human behavior. The decision process of AI is rational and depends on the baseline programming. Without the access to emotions or a consciousness, decisions of an AI algorithm might be good to reach a certain specified goal, but might have unintended consequences for the humans involved (Banerjee et al., 2017)." (p. 821)
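The failure mode the quote describes can be illustrated with a minimal sketch (a hypothetical toy example, not from the paper): a policy chooser that is "rational" with respect to its baseline programming selects the rule that best meets the specified goal, while the objective function says nothing about the humans the rule restricts.

```python
# Hypothetical illustration of goal misspecification in AI rulemaking.
# All names and numbers are invented for this sketch.

def choose_policy(policies, score):
    # Purely "rational" choice: maximize only the programmed objective.
    return max(policies, key=score)

# Toy scenario: a system allocates service slots to minimize queue cost.
# Each policy maps to (queue_cost, people_denied_service).
policies = {
    "serve_everyone":      (10, 0),
    "serve_priority_only": (4, 30),
    "serve_no_one":        (0, 100),  # zero queue cost, everyone denied
}

# The baseline programming scores only the specified goal (low queue cost)...
best = choose_policy(policies, score=lambda p: -policies[p][0])
# ...so the chosen rule denies service to all of the humans involved,
# an unintended consequence never penalized by the objective.
```

Running the sketch, the optimizer picks `serve_no_one`: optimal against the stated goal, harmful to everyone the rule governs.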

Part of AI Ethics
