This subcategory covers the diverse effects of AI-driven personalisation and content-generation technologies on the information landscape. As AI systems become more adept at tailoring content to individual preferences, they risk creating "filter bubbles".
These are informational cocoons in which individuals are predominantly exposed to news and opinions that align with their pre-existing beliefs. AI-driven filter bubbles are likely to be more pervasive and intense than those produced by traditional internet browsing and recommendation algorithms: they adapt to individual preferences in a more sophisticated manner (e.g., through reinforcement learning and analysis of user behavioural data), integrate seamlessly into daily life, and are more opaque.
An overreliance on hyper-personalised AI information sources could lead to a "splintering" of shared reality, in which different groups of people hold vastly different understandings of what is true or important. This is likely to be exacerbated by the proliferation of AI-enabled content-generation technologies that spread misinformation at scale (e.g., AI-generated clickbait), potentially making consumers broadly distrustful of information and of important institutions.
A shared sense of reality is fundamental to social solidarity. Where societal bonds are weakened, individuals may become more hostile towards opposing views. This can hinder constructive dialogue on critical collective issues like climate change and public health.
Excerpt from the MIT AI Risk Repository full report
Highly personalized AI-generated misinformation creating “filter bubbles” where individuals only see what matches their existing beliefs, undermining shared reality, weakening social cohesion and political processes.
Figure: Incident volume relative to governance coverage (each dot is one of 24 subdomains).
Entity: who or what caused the harm
Intent: whether the harm was intentional or accidental
Timing: whether the risk arose pre- or post-deployment
Google Books began indexing AI-generated books containing phrases like "as of my last knowledge update", which could contaminate its Ngram language-tracking tool used by academics for research.
Developers: Google
Deployers: Google, Google Books
ChatGPT and other AI systems are being used to generate low-quality spam content that is overwhelming magazines and publishers, and potentially polluting the internet with AI-generated material that could degrade future AI training.
Developers: OpenAI
Deployers: OpenAI
South Korean presidential candidates used AI-generated deepfake avatars during their 2022 election campaigns to engage with voters, particularly targeting younger demographics through social media platforms.
Developers: Unknown
Deployers: Yoon Suk Yeol, Yoon Suk Yeol's Campaign
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
92 shared governance docs
Using AI systems to gain a personal advantage over others such as through cheating, fraud, scams, blackmail or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism for research or education, impersonating a trusted or fake individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
87 shared governance docs
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
69 shared governance docs
Inadequate regulatory frameworks and oversight mechanisms that fail to keep pace with AI development, leading to ineffective governance and the inability to manage AI risks appropriately.
56 shared governance docs
Requires clear labeling of AI-generated synthetic content in China through both explicit user-facing indicators and implicit metadata, mandates service providers and platforms to verify, disclose, and preserve such labels, and ensures compliance through regulatory oversight.
Calls on the federal government to promote US leadership in AI development with nationwide rules to boost innovation, secure AI frontier models, and align AI with democratic values. Urges common export policies, national security prioritization, infrastructure investment, and AI education enhancement.
Guides AI developers and users in California on compliance with existing consumer-protection, data-protection, civil-rights, and competition laws, as well as with new AI-specific laws.