Attributing the responsibility for AI's failures
Risk Domain
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
"This section, constituting almost 8% of the articles, addresses the implications arising from AI acting and learning without direct human supervision, encompassing two main issues: a responsibility gap and AI's moral status."(p. 16)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Supporting Evidence (2)
1. "AI moral agency and legal status (5.1%). This research line consists of two main issues. The first one concerns the existence and status of artificial moral agency (AMAs). Nowik (2021) analyzes the legal and ethical implications of attributing electronic personhood to AI in employment relations by looking at concepts like AI as an employer, liability, and mandatory insurance. Kornai (2014) discusses the moral obligations of autonomous artificial general intelligences (AGIs), as well as the challenges of bounding AGIs with ethical rationalism. Smith and Vickers (2021) examine how moral responsibility could be attributed to AI using a Strawsonian account. Other researchers discuss the design of artificial moral agents. Mabaso (2021) discusses the use of exemplarism, an ethical theory, in building computationally rational AMAs. Gunkel (2014) advocates for including robots and AI in moral considerations and offers a critique of the limitations of current moral reasoning frameworks. Wallach (2010) stresses the need for a comprehensive model of moral decision-making in developing artificial moral agents, with a focus on mechanisms beyond traditional cognitive factors" (p. 16)
2. "Responsibility gap (2.7%). This research reflects on the concept of the responsibility gap in AI, where an AI agent's actions that cause harm can lack clear responsibility. Saunders and Locke (2020) draw parallels between ancient practices of casting lots and AI in business decision-making and how, in both cases, control and moral responsibility are relinquished. Johnson (2015) discusses the potential emergence of a responsibility gap with autonomous artificial agents of the future, emphasizing that responsibility allocation depends on human choices more than technological complexity. Awad et al. (2019) explore moral dilemmas in self-driving cars and propose that addressing these dilemmas requires collective discussions and agreements on ethical AI principles. Other scholars address responsibility gaps in AI systems, such as Santoni de Sio and Mecacci (2021), who identify interconnected responsibility gaps in AI and propose designing socio-technical systems for "meaningful human control" to comprehensively address these gaps. Schuelke-Leech et al. (2019) examine unexpected differences in the language used in policy documents and discussions about responsibility for highly automated vehicles." (p. 17)
Part of: Human-AI interaction
Other risks from Giarmoleo et al. (2024) (9)
Design of AI
6.1 Power centralization and unfair distribution of benefits (Entity: Human; Intent: Intentional; Timing: Pre-deployment)

Design of AI > Algorithm and data
1.1 Unfair discrimination and misrepresentation (Entity: Human; Intent: Intentional; Timing: Pre-deployment)

Design of AI > Balancing AI's risks
7.3 Lack of capability or robustness (Entity: Other; Intent: Other; Timing: Other)

Design of AI > Threats to human institutions and life
4.2 Cyberattacks, weapon development or use, and mass harm (Entity: Other; Intent: Other; Timing: Other)

Design of AI > Uniformity in the AI field
6.1 Power centralization and unfair distribution of benefits (Entity: Human; Intent: Other; Timing: Other)

Human-AI interaction
5.1 Overreliance and unsafe use (Entity: Other; Intent: Other; Timing: Other)
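
The Entity, Intent, and Timing fields above form a small three-dimensional causal taxonomy that is applied uniformly to every risk record. A minimal sketch of how such records might be encoded, assuming hypothetical names (RiskRecord, Entity, Intent, Timing) that are illustrative only and not part of the repository itself:

```python
from dataclasses import dataclass
from enum import Enum

class Entity(Enum):
    # Who or what caused the harm
    HUMAN = "Human"
    AI = "AI"
    OTHER = "Other"

class Intent(Enum):
    # Whether the harm was intentional or accidental
    INTENTIONAL = "Intentional"
    UNINTENTIONAL = "Unintentional"
    OTHER = "Other"

class Timing(Enum):
    # Whether the risk arises pre- or post-deployment
    PRE_DEPLOYMENT = "Pre-deployment"
    POST_DEPLOYMENT = "Post-deployment"
    OTHER = "Other"

@dataclass
class RiskRecord:
    category: str   # e.g. "Design of AI > Algorithm and data"
    name: str       # e.g. "1.1 Unfair discrimination and misrepresentation"
    entity: Entity
    intent: Intent
    timing: Timing

# One of the records listed above, encoded with this schema.
power_centralization = RiskRecord(
    category="Design of AI",
    name="6.1 Power centralization and unfair distribution of benefits",
    entity=Entity.HUMAN,
    intent=Intent.INTENTIONAL,
    timing=Timing.PRE_DEPLOYMENT,
)
```

Keeping each dimension as a closed enumeration (with an explicit "Other" member) mirrors how the repository tags risks whose cause, intent, or timing is unspecified.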