
Enabling malicious actors and harmful actions

Navigating the Landscape of AI Ethics and Responsibility

Cunha & Estima (2023)

"Some uses of AI have been deeply concerning, namely voice cloning [58] and the generation of deep fake videos [59]. For example, in March 2022, in the early days of the Russian invasion of Ukraine, hackers broadcast via the Ukrainian news website Ukraine 24 a deep fake video of President Volodymyr Zelensky capitulating and calling on his soldiers to lay down their weapons [60]. The necessary software to create these fakes is readily available on the Internet, and the hardware requirements are modest by today’s standards [61]. Other nefarious uses of AI include accelerating password cracking [62] or enabling otherwise unskilled people to create software exploits [63, 64], or effective phishing e-mails [65]. Although some believe that powerful AI models should be prevented from running on personal computers to retain some control, others demonstrate how inglorious that effort may be [66]. Furthermore, as ChatGPT-type systems evolve from conversational systems to agents, capable of acting autonomously and performing tasks with little human intervention, like Auto-GPT [67], new risks emerge."(p. 100)