
Cognitive risks (Risks of amplifying the effects of "information cocoons")

AI Safety Governance Framework

National Technical Committee 260 on Cybersecurity (TC260) (2024)

Sub-category: Risk Domain

Highly personalized AI-generated misinformation creates "filter bubbles" in which individuals see only content that matches their existing beliefs, undermining shared reality and weakening social cohesion and political processes.

"AI can be extensively utilized for customized information services, collecting user information, and analyzing types of users, their needs, intentions, preferences, habits, and even mainstream public awareness over a certain period. It can then be used to offer formulaic and tailored information and services, aggravating the effects of "information cocoons.""(p. 11)
