Humans delegating key decisions to AI systems, or AI systems making decisions that diminish human control and autonomy, potentially leading to humans feeling disempowered, losing the ability to shape a fulfilling life trajectory, or becoming cognitively enfeebled.
"These harms interfere with the peaceful organisation of social life, including in the cultural and political spheres. AI assistants may cause or contribute to friction in human relationships either directly, through convincing a user to end certain valuable relationships, or indirectly due to a loss of interpersonal trust due to an increased dependency on assistants. At the societal level, the spread of misinformation by AI assistants could lead to erasure of collective cultural knowledge. In the political domain, more advanced AI assistants could potentially manipulate voters by prompting them to adopt certain political beliefs using targeted propaganda, including via the use of deep fakes. These effects might then have a wider impact on democratic norms and processes. Furthermore, if AI assistants are only available to some people and not others, this could concentrate the capacity to influence, thus exerting undue influence over political discourse and diminishing diversity of political thought. Finally, by tailoring content to user preferences and biases, AI assistants may inadvertently contribute to the creation of echo chambers and filter bubbles, and in turn to political polarisation and extremism. In an experimental setting, LLMs have been shown to successfully sway individuals on policy matters like assault weapon restrictions, green energy or paid parental leave schemes. Indeed, their ability to persuade matches that of humans in many respects."(p. 88)
Part of AI Influence
Other risks from Gabriel et al. (2024) (69)
Capability failures
Capability failures > Lack of capability for task: 7.3 Lack of capability or robustness
Capability failures > Difficult to develop metrics for evaluating benefits or harms caused by AI assistants: 7.3 Lack of capability or robustness
Capability failures > Safe exploration problem with widely deployed AI assistants: 6.5 Governance failure
Goal-related failures: 7.3 Lack of capability or robustness
Goal-related failures > Misaligned consequentialist reasoning: 7.1 AI pursuing its own goals in conflict with human goals or values
7.3 Lack of capability or robustness