Building an AI able to adapt to humans
Risk Domain
Social and economic inequalities caused by widespread use of AI, such as by automating jobs, reducing the quality of employment, or producing exploitative dependencies between workers and their employers.
"This category involves almost 9% of the articles and deals with ethical concerns arising from AI's capacity to interact with humans in the workplace."(p. 16)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Supporting Evidence (2)
1.
Effective human-AI interaction: "This research line addresses the ethical design of human–AI interactions. Miller (2019) contemplates the symbiotic relationship between humans and AI, discusses the impact of AI on various professions, and explores the concept of brain–computer interfaces. Gerdes (2018) highlights the need for inclusive ethical AI design, aligning AI with human values, and promoting moral growth in AI professionals. Another research line examines the frameworks needed to ensure an ethical human–AI interaction. Trunk et al. (2020) provide insights into integrating AI into organizational decision-making in situations of uncertainty. Like other researchers, they also emphasize the need for ethical frameworks within the context of education. Boni (2021) highlights the ethical dimension of human–AI collaboration, discussing the need for an adequate regulatory framework, human oversight, and AI digital literacy towards the ethical use of AI technologies."(p. 16)
2.
Dialogue systems: "Under this section, scholars investigate user perceptions and expectations of AI in the workplace. Prakash and Das (2020) focus on user perceptions of AI-based conversational agents in mental healthcare services, analyzing factors influencing their adoption and use. Grimes et al. (2021) explore how users' expectations of conversational agents impact their evaluation, suggesting that user-formed expectations can influence perceptions beyond actual agent performance. Terblanche (2020) presents a design framework for creating AI coaches in organizational settings while adhering to coaching standards, ethics, and theoretical models. Tekin (2021) critically examines smartphone psychotherapy chatbots for mental illness diagnosis and treatment and discusses challenges related to early diagnosis, stigma, and global access to mental healthcare. Borau et al. (2021) investigate the perception of gendered chatbots, highlighting ethical questions regarding the humanization of AI based on gendered characteristics. Other scholars deal with societal implications of AI dialog systems. Mulvenna et al. (2021) explore ethical issues related to digital phenotyping, democratizing machine learning, and AI in digital health technologies. Berberich et al. (2020) propose incorporating the concept of harmony from East Asian cultures into the ethical discussion on AI, suggesting that by harmonizing AI, it will make intelligent systems tactful and sensitive to specific contexts."(p. 16)
Part of Human-AI interaction
Other risks from Giarmoleo et al. (2024) (9)
Design of AI
6.1 Power centralization and unfair distribution of benefits (Entity: Human; Intent: Intentional; Timing: Pre-deployment)
Design of AI > Algorithm and data
1.1 Unfair discrimination and misrepresentation (Entity: Human; Intent: Intentional; Timing: Pre-deployment)
Design of AI > Balancing AI's risks
7.3 Lack of capability or robustness (Entity: Other; Intent: Other; Timing: Other)
Design of AI > Threats to human institutions and life
4.2 Cyberattacks, weapon development or use, and mass harm (Entity: Other; Intent: Other; Timing: Other)
Design of AI > Uniformity in the AI field
6.1 Power centralization and unfair distribution of benefits (Entity: Human; Intent: Other; Timing: Other)
Human-AI interaction
5.1 Overreliance and unsafe use (Entity: Other; Intent: Other; Timing: Other)