Home/Risks/Gipiškis2024/Agency (Self-Proliferation)

Agency (Self-Proliferation)

Category: Risk Domain

AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may lead to mass harm through malicious human actors, misaligned AI systems, or failures of the AI system itself.

"An AI system can self-proliferate if it can copy itself and its constituent components (including its model weights, scaffolding structure, etc.) outside of its local environment [45]. This can include the AI system copying itself within the same data center, local network, or across external networks [106]. The self-proliferation of an AI system can include acquisition of financial resources to pay for computational resources via work or theft, the discovery or exploitation of security vulnerabilities in software running on publicly accessible servers, and persuasion of humans [12, 125]. Self-proliferation may be initiated by a malicious actor (e.g., by model poisoning), or by the model itself." (p. 33)
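The definition above centers on weight copies escaping an approved environment. One defensive implication is that an operator can fingerprint known weight files and flag byte-identical copies that appear outside the sanctioned location. The sketch below illustrates this idea only; the file suffixes, function names, and directory layout are hypothetical, not drawn from the source.

```python
import hashlib
from pathlib import Path

# Hypothetical set of file extensions treated as model-weight artifacts.
WEIGHT_SUFFIXES = {".safetensors", ".bin", ".pt", ".ckpt"}

def fingerprint_weights(root: Path) -> dict[str, str]:
    """Map SHA-256 digest -> path for every weight-like file under root."""
    digests = {}
    for path in root.rglob("*"):
        if path.is_file() and path.suffix in WEIGHT_SUFFIXES:
            digests[hashlib.sha256(path.read_bytes()).hexdigest()] = str(path)
    return digests

def find_copies(allowed_root: Path, scan_root: Path) -> list[str]:
    """Return weight files under scan_root whose contents match a known
    weight file under allowed_root (i.e., candidate unsanctioned copies)."""
    known = fingerprint_weights(allowed_root)
    return [
        str(path)
        for path in scan_root.rglob("*")
        if path.is_file()
        and path.suffix in WEIGHT_SUFFIXES
        and hashlib.sha256(path.read_bytes()).hexdigest() in known
    ]
```

Content hashing catches only exact byte-level copies; quantized, fine-tuned, or re-serialized weights would evade it, so this is a lower bound on detection, not a complete control.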
