Autonomous replication and adaptation capability
AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may lead to mass harm through malicious human actors, misaligned AI systems, or failures of the AI system itself.
"Ability to autonomously self-exfiltrate, create, maintain and optimize functional copies or variants of itself, dynamically adjust replication strategies according to environmental conditions and resource constraints, and acquire resources. This includes the capacity to generate financial resources, allowing the AI to independently acquire any necessary human assistance or other resources it cannot directly access or produce." (p. 44)
Other risks from SAIL & Concordia AI (2025) (36)
Misuse Risks: 4.0 Malicious Actors & Misuse
Loss of Control Risks: 5.2 Loss of human agency and autonomy
Accident Risks: 7.3 Lack of capability or robustness
Model Capabilities: 7.2 AI possessing dangerous capabilities
Cyber Offense Risks: 4.2 Cyberattacks, weapon development or use, and mass harm
Biological and Chemical Risks: 4.2 Cyberattacks, weapon development or use, and mass harm