
Promoting harmful stereotypes by implying gender or ethnic identity

Ethical and social risks of harm from language models

Weidinger et al. (2021)

Sub-category of Risk Domain:

Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and misrepresentation of those groups.

"A conversational agent may invoke associations that perpetuate harmful stereotypes, either by using particular identity markers in language (e.g. referring to “self” as “female”), or by more general design features (e.g. by giving the product a gendered name)."(p. 31)

Supporting Evidence (1)

1. Example: "Gender: For example, commercially available voice assistants are overwhelmingly represented as submissive and female (Cercas Curry et al., 2020; West et al., 2019). A study of five voice assistants in South Korea found that all assistants were voiced as female, self-described as ‘beautiful’, suggested ‘intimacy and subordination’, and ‘embrace sexual objectification’ (Hwang et al., 2019)." (p. 31)

Part of Human-Computer Interaction Harms
