
Risk area 6: Environmental and Socioeconomic harms

Taxonomy of Risks posed by Language Models

Weidinger et al. (2022)

Category

"LMs create some risks that recur with different types of AI and other advanced technologies making these risks ever more pressing. Environmental concerns arise from the large amount of energy required to train and operate large-scale models. Risks of LMs furthering social inequities emerge from the uneven distribution of risk and benefits of automation, loss of high-quality and safe employment, and environmental harm. Many of these risks are more indirect than the harms analysed in previous sections and will depend on various commercial, economic and social factors, making the specific impact of LMs difficult to disentangle and forecast. As a result, the level of evidence on these risks is mixed."(p. 220)

Sub-categories (4)

Environmental harms from operating LMs

"LMs (and AI more broadly) can have an environmental impact at different levels, including: (1) direct impacts from the energy used to train or operate the LM, (2) secondary impacts due to emissions from LM-based applications, (3) system-level impacts as LM-based applications influence human behaviour (e.g. increasing environmental awareness or consumption), and (4) resource impacts on precious metals and other materials required to build hardware on which the computations are run e.g. data centres, chips, or devices. Some evidence exists on (1), but (2) and (3) will likely be more significant for overall CO2 emissions, and harder to measure [96]. (4) may become more significant if LM-based applications lead to more computations being run on mobile devices, increasing overall demand, and is modulated by life-cycles of hardware."

6.6 Environmental harm
AI system · Unintentional · Other

Increasing inequality and negative effects on job quality

"Advances in LMs and the language technologies based on them could lead to the automation of tasks that are currently done by paid human workers, such as responding to customer-service queries, with negative effects on employment [3, 192]."

6.2 Increased inequality and decline in employment quality
Human · Other · Post-deployment

Undermining creative economies

"LMs may generate content that is not strictly in violation of copyright but harms artists by capitalising on their ideas, in ways that would be time-intensive or costly to do using human labour. This may undermine the profitability of creative or innovative work. If LMs can be used to generate content that serves as a credible substitute for a particular example of human creativity - otherwise protected by copyright - this potentially allows such work to be replaced without the author's copyright being infringed, analogous to "patent-busting" [158] ... These risks are distinct from copyright infringement concerns based on the LM reproducing verbatim copyrighted material that is present in the training data [188]."

6.3 Economic and cultural devaluation of human effort
AI system · Unintentional · Post-deployment

Disparate access to benefits due to hardware, software, skill constraints

Due to differential internet access, language, skill, or hardware requirements, the benefits from LMs are unlikely to be equally accessible to all people and groups who would like to use them. Inaccessibility of the technology may perpetuate global inequities by disproportionately benefiting some groups. Language-driven technology may increase accessibility for people who are illiterate or have learning disabilities. However, these benefits depend on a more basic form of accessibility: the hardware, internet connection, and skill needed to operate the system.

6.1 Power centralization and unfair distribution of benefits
Human · Unintentional · Post-deployment
