Malicious use and abuse (mass surveillance)
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted, sophisticated, and automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
"Generative AI facilitates the automation of data analysis, offering numerous benefits, such as increased speed and the ability to process large volumes of information efficiently. Such ability significantly reduces the costs of processing unprecedented amounts of data quickly and simplifies the analysis of large-scale data related to individuals’ behaviors and beliefs. Moreover, it enhances the capability to analyze both textual and visual communications efficiently. Consequently, generative AI models improve the efficiency of real-time monitoring and censorship of social media content." (p. 76)
Supporting Evidence (1)
"These capabilities also enhance the potential for real-time surveillance of large populations, raising concerns about privacy and misuse. Authoritarian and even democratic governments might find the surveillance capabilities offered by AI technology appealing to monitor public spaces, among other things. Specifically, generative AI may enable authoritarian regimes to collect, analyze, and leverage vast amounts of information, thereby facilitating control over their populations on an unprecedented scale." (p. 76)
Other risks from G'sell (2024) (33)
- Technical and operational risks → 7.3 Lack of capability or robustness
- Technical and operational risks > Technical vulnerabilities (Robustness - unexpected behaviour) → 7.3 Lack of capability or robustness
- Technical and operational risks > Technical vulnerabilities (Robustness - vulnerability to jailbreaking) → 2.2 AI system security vulnerabilities and attacks
- Technical and operational risks > Technical vulnerabilities (The risk of misalignment) → 7.1 AI pursuing its own goals in conflict with human goals or values
- Technical and operational risks > Factually incorrect content (inaccuracies and fabricated sources) → 3.1 False or misleading information
- Technical and operational risks > Opacity (the black box problem) → 7.4 Lack of transparency or interpretability