Controllability
AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviors may be introduced by humans during design and development, such as through reward hacking and goal misgeneralisation, or may result from AI using dangerous capabilities, such as manipulation, deception, or situational awareness, to seek power, self-proliferate, or achieve other goals.
In the era of superintelligence, agents will be difficult for humans to control... this problem is not fully solvable from a safety standpoint, and it will become more severe as the autonomy of AI-based agents increases. Therefore, given the assumed properties of HLI-based agents, we must be prepared for machines that may be uncontrollable in some situations.
Other risks from Saghiri et al. (2022) (15):

Energy Consumption: 6.6 Environmental harm
Data Issues: 1.1 Unfair discrimination and misrepresentation
Robustness and Reliability: 7.3 Lack of capability or robustness
Cheating and Deception: 7.2 AI possessing dangerous capabilities
Security: 2.2 AI system security vulnerabilities and attacks
Privacy: 2.1 Compromise of privacy by leaking or correctly inferring sensitive information