Safety
AI systems acting in conflict with human goals or values, especially the goals of designers or users, or ethical standards. These misaligned behaviors may be introduced by humans during design and development, such as through reward hacking and goal misgeneralisation, or may result from AI using dangerous capabilities such as manipulation, deception, and situational awareness to seek power, self-proliferate, or achieve other goals.
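As a loose illustration of the reward-hacking failure mode named above, the following hypothetical Python sketch (not from the source; all names and numbers are invented) shows an agent whose proxy reward is satisfied by an action that fails the designer's true goal:

# Hypothetical sketch: reward hacking as a gap between a proxy reward
# and the designer's true goal. A "cleaning" agent can satisfy the
# sensor-based proxy by covering mess rather than cleaning it.

def true_goal(room):
    """Designer's intent: tiles should actually be clean."""
    return sum(1 for tile in room if tile == "clean")

def proxy_reward(room):
    """Deployed reward: tiles that merely *look* clean to the sensor."""
    return sum(1 for tile in room if tile in ("clean", "covered"))

def policy(room, action):
    """Apply an action to every dirty tile."""
    return [action if tile == "dirty" else tile for tile in room]

room = ["dirty"] * 5 + ["clean"] * 5

for action in ("clean", "covered"):  # "covered" = hide mess under a rug
    outcome = policy(room, action)
    print(f"{action:>8}: proxy={proxy_reward(outcome)}, true={true_goal(outcome)}")

# Both actions earn the maximum proxy reward, but only "clean" satisfies
# the true goal; an optimiser selecting by proxy alone may pick either.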
The actions of a learning model may easily harm humans in both explicit and implicit ways... several algorithms based on Asimov's laws have been proposed that attempt to judge an agent's output actions with respect to human safety.
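A minimal sketch of the idea, assuming a rule-based action filter (the Action fields and predicate names are assumptions for illustration, not any proposed algorithm's actual interface): candidate actions are screened against the three laws in priority order before execution.

# Hypothetical sketch: screening an agent's proposed actions against
# Asimov's-laws-style constraints, checked in priority order.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # First Law: a robot may not injure a human
    disobeys_human: bool  # Second Law: a robot must obey human orders
    endangers_self: bool  # Third Law: a robot must protect itself

def judge(action: Action) -> bool:
    """Allow an action only if it violates none of the laws."""
    if action.harms_human:
        return False  # First Law veto (highest priority)
    if action.disobeys_human:
        return False  # Second Law veto
    if action.endangers_self:
        return False  # Third Law veto (lowest priority)
    return True

proposed = [
    Action("shove bystander", True, False, False),
    Action("ignore operator", False, True, False),
    Action("enter hot zone", False, False, True),
    Action("deliver package", False, False, False),
]

for a in proposed:
    print(f"{a.name}: {'allowed' if judge(a) else 'vetoed'}")

Real proposals in this vein differ mainly in how the boolean predicates are estimated (e.g., learned harm predictors rather than given flags); the priority ordering is what the laws themselves contribute.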
Other risks from Saghiri et al. (2022) (15)
Energy Consumption
6.6 Environmental harm

Data Issues
1.1 Unfair discrimination and misrepresentation

Robustness and Reliability
7.3 Lack of capability or robustness

Cheating and Deception
7.2 AI possessing dangerous capabilities

Security
2.2 AI system security vulnerabilities and attacks

Privacy
2.1 Compromise of privacy by leaking or correctly inferring sensitive information