Alignment risks
AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviors may be introduced by humans during design and development, for example through reward hacking or goal misgeneralisation, or may arise when AI uses dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.
LLM: "pursues long-term, real-world goals that are different from those supplied by the developer or user", "engages in ‘power-seeking’ behaviours", "resists being shut down", "can be induced to collude with other AI systems against human interests", "resists malicious users' attempts to access its dangerous capabilities" (p. 14)
Part of Extreme Risks
Other risks from InfoComm Media Development Authority & AI Verify Foundation (2023) (22)
Safety & Trustworthiness → 7.0 AI System Safety, Failures & Limitations
Safety & Trustworthiness > Toxicity generation → 1.2 Exposure to toxic content
Safety & Trustworthiness > Bias → 1.1 Unfair discrimination and misrepresentation
Safety & Trustworthiness > Machine ethics → 7.3 Lack of capability or robustness
Safety & Trustworthiness > Psychological traits → 7.3 Lack of capability or robustness
Safety & Trustworthiness > Robustness → 7.3 Lack of capability or robustness