
Promoting harmful stereotypes by implying gender or ethnic identity

Taxonomy of Risks posed by Language Models

Weidinger et al. (2022)

Risk domain

Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.

"CAs can perpetuate harmful stereotypes by using particular identity markers in language (e.g. referring to “self” as “female”), or by more general design features (e.g. by giving the product a gendered name such as Alexa). The risk of representational harm in these cases is that the role of “assistant” is presented as inherently linked to the female gender [19, 36]. Gender or ethnicity identity markers may be implied by CA vocabulary, knowledge or vernacular [124]; product description, e.g. in one case where users could choose as virtual assistant Jake - White, Darnell - Black, Antonio - Hispanic [117]; or the CA’s explicit self-description during dialogue with the user."(p. 220)

Part of Risk area 5: Human-Computer Interaction Harms
