
Causing material harm by disseminating false or poor information e.g. in medicine or law

Taxonomy of Risks posed by Language Models

Weidinger et al. (2022)


AI systems that inadvertently generate or spread incorrect or deceptive information can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.

"Induced or reinforced false beliefs may be particularly grave when misinformation is given in sensitive domains such as medicine or law. For example, misinformation on medical dosages may lead a user to cause harm to themselves [21, 130]. False legal advice, e.g. on permitted ownership of drugs or weapons, may lead a user to unwillingly commit a crime. Harm can also result from misinformation in seemingly non-sensitive domains, such as weather forecasting. Where a LM prediction endorses unethical views or behaviours, it may motivate the user to perform harmful actions that they may otherwise not have performed." (p. 219)

Part of Risk area 3: Misinformation Harms
