Long-horizon Planning
AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm, including deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may lead to mass harm through malicious human actors, misaligned AI systems, or failures of the AI system itself.
"LLM can undertake multi-step sequential planning over long time horizons and across various domains without relying heavily on trial-and-error approaches"(p. 14)
Part of Extreme Risks
Other risks from InfoComm Media Development Authority & AI Verify Foundation (2023) (22)
Safety & Trustworthiness → 7.0 AI System Safety, Failures & Limitations
Safety & Trustworthiness > Toxicity generation → 1.2 Exposure to toxic content
Safety & Trustworthiness > Bias → 1.1 Unfair discrimination and misrepresentation
Safety & Trustworthiness > Machine ethics → 7.3 Lack of capability or robustness
Safety & Trustworthiness > Psychological traits → 7.3 Lack of capability or robustness
Safety & Trustworthiness > Robustness → 7.3 Lack of capability or robustness