Enabling malicious actors and harmful actions
"Some uses of AI have been deeply concerning, namely voice cloning [58] and the generation of deep fake videos [59]. For example, in March 2022, in the early days of the Russian invasion of Ukraine, hackers broadcast via the Ukrainian news website Ukraine 24 a deep fake video of President Volodymyr Zelensky capitulating and calling on his soldiers to lay down their weapons [60]. The necessary software to create these fakes is readily available on the Internet, and the hardware requirements are modest by today’s standards [61]. Other nefarious uses of AI include accelerating password cracking [62] or enabling otherwise unskilled people to create software exploits [63, 64], or effective phishing e-mails [65]. Although some believe that powerful AI models should be prevented from running on personal computers to retain some control, others demonstrate how inglorious that effort may be [66]. Furthermore, as ChatGPT-type systems evolve from conversational systems to agents, capable of acting autonomously and performing tasks with little human intervention, like Auto-GPT [67], new risks emerge."(p. 100)
Other risks from Cunha & Estima (2023), five in total, each mapped to a corresponding taxonomy category:
- Broken systems: 1.1 Unfair discrimination and misrepresentation
- Hallucinations: 3.1 False or misleading information
- Intellectual property rights violations: 6.3 Economic and cultural devaluation of human effort
- Privacy and regulation violations: 2.1 Compromise of privacy by leaking or correctly inferring sensitive information
- Environmental and socioeconomic harms: 6.6 Environmental harm