
Lack of transparency, explainability, and trust

Embodied AI: Emerging Risks and Opportunities for Policy Action

Perlo et al. (2025)

Sub-category: Risk Domain

Challenges in understanding or explaining the decision-making processes of AI systems. These can lead to mistrust, to difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and to an inability to identify and correct errors.

"Understanding how AI reaches conclusions or why AI systems perform specific actions motivates an entire branch of interpretability research [111], but physical embodiment raises the stakes for understanding these systems. For example, transparency of planned actions and explainability of decision-making is crucial when an AV suddenly changes lanes. A lack of transparency and explainability could lead to a lack of trust, which could become a critical and socially destabilizing issue with the widespread deployment of EAI [112–114]."(p. 6)
