
Entrenching specific ideologies

The Ethics of Advanced AI Assistants

Gabriel et al. (2024)


Highly personalized, AI-generated misinformation can create “filter bubbles” in which individuals see only information that matches their existing beliefs, undermining shared reality and weakening social cohesion and political processes.

"AI assistants may provide ideologically biased or otherwise partial information in attempting to align to user expectations. In doing so, AI assistants may reinforce people’s pre-existing biases and compromise productive political debate."(p. 164)

Part of Misinformation risks. Gabriel et al. (2024) identify 69 other risks.