Governance of autonomous intelligence systems
Inadequate regulatory frameworks and oversight mechanisms that fail to keep pace with AI development, leading to ineffective governance and the inability to manage AI risks appropriately.
"Governance of autonomous intelligence systems addresses the question of how to control autonomous systems in general. Since nowadays it is very difficult to conceive automated decisions based on AI, the latter is often referred to as a ‘black box’ (Bleicher, 2017). This black box may take unforeseeable actions and cause harm to humanity." (p. 820)
Supporting Evidence (1)
"Situations can get even worse when the AI becomes autonomous enough to pursue its own goals, even if this means harm to individuals or humanity (Lin et al., 2008). Examples like this give rise to the questions of transparency and accountability for AI systems." (p. 820)
Part of AI Law and Regulation
Other risks from Wirtz, Weyerer & Sturm (2020) (11)
AI Law and Regulation
6.5 Governance failure (AI Law and Regulation > Responsibility and accountability)
6.5 Governance failure (AI Law and Regulation > Privacy and safety)
4.1 Disinformation, surveillance, and influence at scale (AI Ethics)
7.3 Lack of capability or robustness (AI Ethics > AI-rulemaking for human behaviour)
7.3 Lack of capability or robustness (AI Ethics > Compatibility of AI vs. human value judgement)
7.3 Lack of capability or robustness