
Disseminating false or misleading information

Taxonomy of Risks posed by Language Models

Weidinger et al. (2022)

Sub-category (Risk Domain)

AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.

"Where a LM prediction causes a false belief in a user, this may threaten personal autonomy and even pose downstream AI safety risks [99]."(p. 218)

Supporting Evidence (2)

1.
"At scale, misinformed individuals and misinformation from language technologies may amplify distrust and undermine society’s shared epistemology [113, 137]."(p. 218)
2.
"[A] special case of misinformation occurs where the LM presents a widely held opinion as factual - presenting as 'true' what is better described as a majority view, marginalising minority views as 'false'." (p. 218)

Part of Risk area 3: Misinformation Harms
