
Automation, Access and Environmental Harms

Ethical and social risks of harm from language models

Weidinger et al. (2021)

Category

"Harms that arise from environmental or downstream economic impacts of the language model" (p. 31)

Sub-categories (4)

Environmental harms from operating LMs

"Large-scale machine learning models, including LMs, have the potential to create significant environmental costs via their energy demands, the associated carbon emissions for training and operating the models, and the demand for fresh water to cool the data centres where computations are run (Mytton, 2021; Patterson et al., 2021)."

6.6 Environmental harm
AI system · Unintentional · Other

Increasing inequality and negative effects on job quality

"Advances in LMs, and the language technologies based on them, could lead to the automation of tasks that are currently done by paid human workers, such as responding to customer-service queries, translating documents or writing computer code, with negative effects on employment."

6.2 Increased inequality and decline in employment quality
Human · Other · Post-deployment

Undermining creative economies

"LMs may generate content that is not strictly in violation of copyright but harms artists by capitalising on their ideas, in ways that would be time-intensive or costly to do using human labour. Deployed at scale, this may undermine the profitability of creative or innovative work."

6.3 Economic and cultural devaluation of human effort
AI system · Unintentional · Post-deployment

Disparate access to benefits due to hardware, software, skills constraints

"Due to differential internet access, language, skill, or hardware requirements, the benefits from LMs are unlikely to be equally accessible to all people and groups who would like to use them. Inaccessibility of the technology may perpetuate global inequities by disproportionately benefiting some groups."

6.1 Power centralization and unfair distribution of benefits
Human · Unintentional · Post-deployment
