Lack of transparency, explainability, and trust
Challenges in understanding or explaining the decision-making processes of AI systems. These can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and an inability to identify and correct errors.
"Understanding how AI reaches conclusions or why AI systems perform specific actions motivates an entire branch of interpretability research [111], but physical embodiment raises the stakes for understanding these systems. For example, transparency of planned actions and explainability of decision-making is crucial when an AV suddenly changes lanes. A lack of transparency and explainability could lead to a lack of trust, which could become a critical and socially destabilizing issue with the widespread deployment of EAI [112–114]."(p. 6)
Other risks from Perlo et al. (2025) (12):
- Economic Risks: 6.0 Socioeconomic & Environmental
- Purposeful or malicious harm: 4.2 Cyberattacks, weapon development or use, and mass harm
- Accidental harm
- Privacy Violations: 2.1 Compromise of privacy by leaking or correctly inferring sensitive information
- Misinformation: 3.1 False or misleading information
- Labour Displacement: 6.2 Increased inequality and decline in employment quality