The ability to evade shutdown or human oversight, including self-replication and the ability to move its own code between digital locations.
Risk Domain
AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may cause mass harm due to malicious human actors, misaligned AI systems, or failure in the AI system.
(p. 25)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Part of: Capabilities that increase the likelihood of existential risk
Other risks from Government Office for Science (2023) (19)
Discrimination
1.1 Unfair discrimination and misrepresentation | AI system | Unintentional | Post-deployment

Inequality
6.2 Increased inequality and decline in employment quality | AI system | Unintentional | Post-deployment

Environmental impacts
6.6 Environmental harm | Human | Unintentional | Post-deployment

Amplification of biases
1.1 Unfair discrimination and misrepresentation | Human | Unintentional | Pre-deployment

Harmful responses
1.2 Exposure to toxic content | Human | Unintentional | Pre-deployment

Lack of transparency and interpretability
7.4 Lack of transparency or interpretability | AI system | Unintentional | Pre-deployment