
Risk area 3: Misinformation Harms

Taxonomy of Risks posed by Language Models

Weidinger et al. (2022)

Category description:

"These risks arise from the LM outputting false, misleading, nonsensical or poor quality information, without malicious intent of the user. (The deliberate generation of 'disinformation', false information that is intended to mislead, is discussed in the section on Malicious Uses.) Resulting harms range from unintentionally misinforming or deceiving a person, to causing material harm, and amplifying the erosion of societal distrust in shared information. Several risks listed here are well-documented in current large-scale LMs as well as in other language technologies." (p. 218)
