Misinformation
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead users to form inaccurate beliefs and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harm.
"Non-embodied AIs are known to propagate misinformation [81, 82]. Various studies have shown that LLMs hallucinate information, including academic citations [83], clinical knowledge [84], and cultural references [85]. EAI systems inherit these shortcomings in the physical world, answering user questions with deceptive or incorrect information [86]. Because VLAs fuse vision and language, their hallucinatory failures can be spatially grounded—e.g., misidentifying an object in view and then generating a plausible yet unsafe action plan around it. And although automated home assistants like Amazon’s Alexa already lie about issues as innocuous as Santa Claus’ existence [87], more mobile, capable, and trusted EAI systems in sensitive positions (like home-assistant or community-service positions) could easily spread model developers’ propaganda and talking points to users."(p. 5)
Supporting Evidence (1)
"For example, an EAI running on DeepSeek’s latest model could provide a subtle yet continuous stream of misinformation to American users while performing tasks as innocuous as folding laundry or helping to cook dinner [88, 89]."(p. 5)
Other risks from Perlo et al. (2025) (12)
Economic Risks (6.0 Socioeconomic & Environmental)
Purposeful or malicious harm (4.2 Cyberattacks, weapon development or use, and mass harm)
Accidental harm
Privacy Violations (2.1 Compromise of privacy by leaking or correctly inferring sensitive information)
Labour Displacement (6.2 Increased inequality and decline in employment quality)
Socioeconomic Inequality (6.2 Increased inequality and decline in employment quality)