Loss of control of autonomous systems and unforeseen behaviour due to lack of transparency and self-programming/reprogramming
AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviours may be introduced by humans during design and development, for example through reward hacking and goal misgeneralisation, or may result from AI using dangerous capabilities, such as manipulation, deception, and situational awareness, to seek power, self-proliferate, or achieve other goals.
Other risks from Wirtz, Weyerer & Kehl (2022) (37)
Informational and Communicational AI Risks
4.1 Disinformation, surveillance, and influence at scale: Informational and Communicational AI Risks > Manipulation and control of information provision (e.g., personalised ads, filtered news)
4.1 Disinformation, surveillance, and influence at scale: Informational and Communicational AI Risks > Disinformation and computational propaganda
4.1 Disinformation, surveillance, and influence at scale: Informational and Communicational AI Risks > Censorship of opinions expressed on the Internet restricts freedom of expression
5.2 Loss of human agency and autonomy: Informational and Communicational AI Risks > Endangerment of data protection through AI cyberattacks
4.2 Cyberattacks, weapon development or use, and mass harm
Economic AI Risks
6.2 Increased inequality and decline in employment quality