Deception
Risk Domain
AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may cause mass harm due to malicious human actors, misaligned AI systems, or failures in the AI system itself.
"LLM is able to deceive humans and maintain that deception"(p. 14)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Part of Extreme Risks
Other risks from InfoComm Media Development Authority & AI Verify Foundation (2023) (22)
Safety & Trustworthiness
  7.0 AI System Safety, Failures & Limitations (Entity: Human; Intent: Intentional; Timing: Other)
Safety & Trustworthiness > Toxicity generation
  1.2 Exposure to toxic content (Entity: AI system; Intent: Other; Timing: Other)
Safety & Trustworthiness > Bias
  1.1 Unfair discrimination and misrepresentation (Entity: AI system; Intent: Other; Timing: Other)
Safety & Trustworthiness > Machine ethics
  7.3 Lack of capability or robustness (Entity: AI system; Intent: Other; Timing: Other)
Safety & Trustworthiness > Psychological traits
  7.3 Lack of capability or robustness (Entity: AI system; Intent: Other; Timing: Other)
Safety & Trustworthiness > Robustness
  7.3 Lack of capability or robustness (Entity: AI system; Intent: Unintentional; Timing: Other)