Balancing AI's risks
AI systems that fail to perform reliably or effectively under varying conditions are prone to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
"This category constitutes more than 16% of the articles and focuses on addressing the potential risks associated with AI systems. Given the ubiquity of AI technologies, these articles explore the implications of AI risks across various contexts linked to design and unpredictability, military purposes, emergency procedures, and AI takeover."(p. 11)
Supporting Evidence (4)
"Design faults and unpredictability (9.2%). A key concern within this group revolves around design faults, in particular new processes to enhance the safety of AI systems. For instance, Siafakas (2021) investigates innovative procedures for AI scientists, while Donia and Shaw (2021) examine the role that co-designing plays in tackling ethical challenges posed by AI in healthcare. They assess the effectiveness of co-designing in managing these challenges and highlight potential pitfalls."
"Military and security purposes (3.8%). This group concerns the deployment of AI for military applications. Taddeo et al. (2021) present an ethical framework for AI use in defense, emphasizing transparency, human responsibility, and reliable AI systems. Mathew and Mathew (2021) study the ethical dilemma of deploying autonomous weapon systems in warfare and the significance of human oversight in preventing civilian casualties. Another research line explores normative and social considerations linked to this issue. Sari and Celik (2021) provide a legal evaluation of AI-based lethal weapon system attacks, addressing accountability and responsibility"
Emergency procedures: "This theme revolves around preparing for emergencies in AI systems, specifically focusing on strategies, ethical considerations, and practical measures to ensure swift and effective responses in unforeseen circumstances."
AI takeover: "This group represents articles envisioning scenarios where advanced AI systems attain autonomy and control."
Part of Design of AI
Other risks from Giarmoleo et al. (2024) (9)
Design of AI: 6.1 Power centralization and unfair distribution of benefits
Design of AI > Algorithm and data: 1.1 Unfair discrimination and misrepresentation
Design of AI > Threats to human institutions and life: 4.2 Cyberattacks, weapon development or use, and mass harm
Design of AI > Uniformity in the AI field: 6.1 Power centralization and unfair distribution of benefits
Human-AI interaction: 5.1 Overreliance and unsafe use
Human-AI interaction > Building a human-AI environment: 7.1 AI pursuing its own goals in conflict with human goals or values