Eroded epistemics
Category: Risk Domain
Highly personalized AI-generated misinformation creating “filter bubbles” in which individuals see only what matches their existing beliefs, undermining shared reality, social cohesion, and political processes.
“Strong AI may... enable personally customized disinformation campaigns at scale... AI itself could generate highly persuasive arguments that invoke primal human responses and inflame crowds... undermine collective decision-making, radicalize individuals, derail moral progress, or erode consensus reality” (p. 13)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Hendrycks & Mazeika (2022) (7)
Weaponization
4.2 Cyberattacks, weapon development or use, and mass harm. Entity: Human; Intent: Intentional; Timing: Post-deployment

Enfeeblement
5.2 Loss of human agency and autonomy. Entity: Human; Intent: Intentional; Timing: Post-deployment

Proxy misspecification
7.1 AI pursuing its own goals in conflict with human goals or values. Entity: Other; Intent: Other; Timing: Pre-deployment

Value lock-in
6.1 Power centralization and unfair distribution of benefits. Entity: Human; Intent: Intentional; Timing: Post-deployment

Emergent functionality
7.2 AI possessing dangerous capabilities. Entity: AI system; Intent: Unintentional; Timing: Post-deployment

Deception
7.1 AI pursuing its own goals in conflict with human goals or values. Entity: AI system; Intent: Intentional; Timing: Other
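For readers who want to work with these classifications programmatically, the three-dimension taxonomy (Entity, Intent, Timing) and the risk entries listed above can be sketched as a simple data structure. This is a hypothetical illustration only: the class and field names are invented here, and the enum values merely mirror the labels used on this page, not any official schema.

```python
from dataclasses import dataclass
from enum import Enum

# Enum values mirror the labels used on this page (hypothetical schema).
class Entity(Enum):
    HUMAN = "Human"
    AI_SYSTEM = "AI system"
    OTHER = "Other"

class Intent(Enum):
    INTENTIONAL = "Intentional"
    UNINTENTIONAL = "Unintentional"
    OTHER = "Other"

class Timing(Enum):
    PRE_DEPLOYMENT = "Pre-deployment"
    POST_DEPLOYMENT = "Post-deployment"
    OTHER = "Other"

@dataclass
class Risk:
    name: str
    subdomain: str
    entity: Entity
    intent: Intent
    timing: Timing

# The six other risks from Hendrycks & Mazeika (2022) listed above.
risks = [
    Risk("Weaponization", "4.2 Cyberattacks, weapon development or use, and mass harm",
         Entity.HUMAN, Intent.INTENTIONAL, Timing.POST_DEPLOYMENT),
    Risk("Enfeeblement", "5.2 Loss of human agency and autonomy",
         Entity.HUMAN, Intent.INTENTIONAL, Timing.POST_DEPLOYMENT),
    Risk("Proxy misspecification", "7.1 AI pursuing its own goals in conflict with human goals or values",
         Entity.OTHER, Intent.OTHER, Timing.PRE_DEPLOYMENT),
    Risk("Value lock-in", "6.1 Power centralization and unfair distribution of benefits",
         Entity.HUMAN, Intent.INTENTIONAL, Timing.POST_DEPLOYMENT),
    Risk("Emergent functionality", "7.2 AI possessing dangerous capabilities",
         Entity.AI_SYSTEM, Intent.UNINTENTIONAL, Timing.POST_DEPLOYMENT),
    Risk("Deception", "7.1 AI pursuing its own goals in conflict with human goals or values",
         Entity.AI_SYSTEM, Intent.INTENTIONAL, Timing.OTHER),
]

# Example query: risks classified as intentionally caused by an AI system.
ai_intentional = [r.name for r in risks
                  if r.entity is Entity.AI_SYSTEM and r.intent is Intent.INTENTIONAL]
print(ai_intentional)  # ['Deception']
```

Encoding the dimensions as enums rather than free-form strings makes filtering and cross-tabulating entries straightforward and catches typos at construction time.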