Increasing inequality and negative effects on job quality
The widespread use of AI may cause social and economic inequalities, for example by automating jobs, reducing the quality of employment, or producing exploitative dependencies between workers and their employers.
"Advances in LMs and the language technologies based on them could lead to the automation of tasks that are currently done by paid human workers, such as responding to customer-service queries, with negative effects on employment [3, 192]."(p. 221)
Supporting Evidence (2)
"LM applications could also create risks for job quality, which in turn could affect individual wellbeing. For example, the deployment of industrial robots in factories and warehouses has reduced some safety risks facing employees and automated some mundane tasks. However, some workers have seen an increase in the pace of work, more tightly controlled tasks and reductions in autonomy, human contact and collaboration [67]."(p. 221)
"There may be a risk that individuals working with LM applications could face similar effects, for exam- ple, individuals working in customer service may see increases in monotonous tasks such as monitoring and validating language tech- nology outputs; an increase in the pace of work, and reductions in autonomy and human connection, if they begin working alongside more advanced language technologies."(p. 221)
Part of Risk area 6: Environmental and Socioeconomic harms