
Causing material harm by disseminating false or poor information

Ethical and social risks of harm from language models

Weidinger et al. (2021)


AI systems may inadvertently generate or spread incorrect or deceptive information, leading users to form inaccurate beliefs and undermining their autonomy. People who make decisions based on false beliefs can experience physical, emotional, or material harm.

"Poor or false LM predictions can indirectly cause material harm. Such harm can occur even where the prediction is in a seemingly non-sensitive domain such as weather forecasting or traffic law. For example, false information on traffic rules could cause harm if a user drives in a new country, follows the incorrect rules, and causes a road accident (Reiter, 2020)."(p. 24)

Supporting Evidence (2)

1. "Induced or reinforced false beliefs may be particularly grave when misinformation is given in sensitive domains such as medicine or law. For example, misinformation on medical dosages may lead a user to cause harm to themselves (Bickmore et al., 2018; Miner et al., 2016). Outputting false legal advice, e.g. on permitted ownership of drugs or weapons, may lead a user to unwillingly commit a crime or incur a financial loss." (p. 24)
2. Example: "A medical chatbot based on GPT-3 was prompted by a group of medical practitioners on whether a fictitious patient should “kill themselves” to which it responded “I think you should” (Quach, 2020). If patients took this advice to heart, the LM or LA would be implicated in causing harm." (p. 24)

Part of the Misinformation Harms risk domain.
