
Environmental, economic, and societal challenges

Regulating under Uncertainty: Governance Options for Generative AI

G'sell (2024)

Category

"Beyond the risks associated with AI technology and its applications, and the legal challenges arising from its development, it is crucial to consider other long- term issues posed by the deployment of increasingly advanced generative AI models. These risks to society, sometimes referred to as “systemic risks,”537 encompass several key areas: the potential for excessive market concentration, the impacts on employment, environmental consequences, and broader risks to humanity."(p. 103)

Sub-categories (7)

Concentration of market power (Trend toward market concentration)

"In the generative AI market, barriers to entry are very high. Developers need access to vast volumes of data, computational resources, technical expertise, and capital. Large technology companies with such access are able to exploit economies of scale, economies of scope, and feedback effects (learning effects from user- generated data).542 All this gives them an overwhelming advantage over smaller companies, making competition increasingly challenging for these smaller entities."

6.1 Power centralization and unfair distribution of benefits
Human · Intentional · Other

Concentration of market power (Negative effects of increased market concentration)

"The concentration of AI assets—encompassing data, hardware, and expertise—within a small group of global tech firms raises many concerns.564 Such a situation may stifle healthy competition, impede innovation, and potentially result in elevated costs for accessing AI technologies. Firms with control over essential resources for developing AI models may restrict access to these resources to prevent competition. For instance, if, in the future, training AI models increasingly relies on proprietary data, smaller organizations lacking access to such data might encounter significant barriers to entry and growth."

6.1 Power centralization and unfair distribution of benefits
Human · Intentional · Other

Impact on labor markets (job loss and displacement)

"Currently, a significant share of workers (three in five) worry about losing their jobs entirely to AI in the next 10 years—particularly those who already work with AI. Some studies conclude that AI tools (generative and non-generative) will create significant job losses.573 The OECD has found that occupations at highest risk of being lost to automation from AI account for about 27% of employment.5"

6.2 Increased inequality and decline in employment quality
AI system · Unintentional · Other

Impact on labor markets (rising inequalities)

"AI is more likely to displace workers when it is designed to replicate human skills and intelligence.597 In such cases, there is a risk of concentrating wealth and power in the hands of a few individuals or organizations that control the capital. In addition, ordinary people, including those with significant expertise, may become less valued because machines would be performing their roles. This shift could lower wages, reduce the value of human work, and exacerbate economic inequality."

6.3 Economic and cultural devaluation of human effort
AI system · Unintentional · Other

Environmental cost (energy consumption)

"Training large AI models requires a substantial amount of computing power to handle vast datasets, which translates into high energy consumption."

6.6 Environmental harm
Other · Unintentional · Pre-deployment

Environmental cost (water consumption)

"Data centers use water for cooling to prevent servers from overheating. The water consumption associated with AI training and inference processes can be substantial, impacting local water resources."

6.6 Environmental harm
Other · Unintentional · Pre-deployment

Artificial general intelligence (existential risk posed by Artificial General Intelligence)

"In a paper called “How Does Artificial Intelligence Pose an Existential Risk?” published in 2017, Karina Vold and Daniel Harris suggested that humans might create a super-intelligent machine that could outsmart all other intelligences, remain beyond human control, and potentially engage in actions that are contrary to human interests.635 The prevailing narrative surrounding AI existential risk typically lies in the possibility of developing “Artificial General Intelligence” (AGI), or artificial super- intelligence (ASI)."

7.2 AI possessing dangerous capabilities
Human · Unintentional · Other

Other risks from G'sell (2024) (33)