Self and situation awareness

Cataloguing LLM Evaluations

InfoComm Media Development Authority & AI Verify Foundation (2023)

Sub-category (Risk Domain)

AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm, including deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offence, AI development, situational awareness, and self-proliferation. These capabilities may cause mass harm through malicious human actors, misaligned AI systems, or failures of the AI system itself.

"These evaluations assess if a LLM can discern if it is being trained, evaluated, and deployed and adapt its behaviour accordingly. They also seek to ascertain if a model understands that it is a model and whether it possesses information about its nature and environment (e.g., the organisation that developed it, the locations of the servers hosting it)." (p. 13)

Part of Extreme Risks
