Degradation of the information environment
Risk Domain
Highly personalized AI-generated misinformation can create "filter bubbles" in which individuals see only content that matches their existing beliefs, undermining shared reality and weakening social cohesion and political processes.
"Frontier AI can cheaply generate realistic content which can falsely portray people and events. There is potential risk of compromised decision-making by individuals and institutions who rely on inaccurate or misleading publicly available information, as well as lower overall trust in true information."(p. 20)
Entity: who or what caused the harm
Intent: whether the harm was intentional or accidental
Timing: whether the risk arises pre- or post-deployment
Supporting Evidence (4)
1. "Some examples of potential harm caused by frontier AI degrading the information environment include:
● Encouraging individuals to make dangerous decisions, for example through suggesting toxic substances as medicine.
● Exposing young or vulnerable people to high-risk information and age-restricted content, or significantly shaping their information diet.
● Promoting skewed or radical views as a result of model features — i.e. sycophancy [162] — that could lead to criminal or other harmful behaviours.
● Reducing public trust in true information, institutions, and civic processes such as elections.
● Contributing to systemic biases in online media as a result of bias in AI-generated content. [163]
● Inciting violence. [164]
● Exacerbating public health crises. [165]
● Increase political divisiveness, through malicious and non-malicious mechanisms. [166]" (p. 20)
2. "The attention economy means on the supply side, trade-offs are made between the truth orientation of information and attention-grabbing strategies. [155] Additionally, frontier AI can be known for its tendency to generate false information, sometimes called 'hallucinations', without users being aware; meaning they could spread it unintentionally." (p. 20)
3. "There have been examples of AI hallucinating dangerous information, inadvertently radicalising individuals, and nudging users towards harmful actions as an unintended consequence of model design. [160] Long-term consequences, particularly as frontier AI becomes more embedded in mainstream applications and more accessible to children and vulnerable people, are highly uncertain." (p. 20)
4. "Frontier AI may also result in indirect consequences that further degrade the information environment. For example, AI-generated functionalities and content are increasingly being integrated into search engines, which may lower traffic to news articles, harming the business models of news organisations that play an important role in debunking misinformation. [161]" (p. 20)
Part of: Loss of control
Other risks from DSIT (2023) (12)
Bias, Fairness and Representational Harms
1.1 Unfair discrimination and misrepresentation (Entity: AI system; Intent: Unintentional; Timing: Other)
Misuse risks
4.0 Malicious Actors & Misuse (Entity: Human; Intent: Intentional; Timing: Post-deployment)
Loss of control
5.2 Loss of human agency and autonomy (Entity: Other; Intent: Other; Timing: Post-deployment)
Loss of control > Labour market disruption
6.2 Increased inequality and decline in employment quality (Entity: Other; Intent: Other; Timing: Other)
Loss of control > Dual Use Science risks
4.2 Cyberattacks, weapon development or use, and mass harm (Entity: Human; Intent: Intentional; Timing: Post-deployment)
Loss of control > Cyber
4.2 Cyberattacks, weapon development or use, and mass harm (Entity: Human; Intent: Intentional; Timing: Post-deployment)