Environmental, economic, and societal challenges
"Beyond the risks associated with AI technology and its applications, and the legal challenges arising from its development, it is crucial to consider other long-term issues posed by the deployment of increasingly advanced generative AI models. These risks to society, sometimes referred to as “systemic risks,”537 encompass several key areas: the potential for excessive market concentration, the impacts on employment, environmental consequences, and broader risks to humanity."(p. 103)
Entity
Human: Due to a decision or action made by humans
AI system: Due to a decision or action made by an AI system
Other: Due to some other reason or is ambiguous
Not coded

Intent
Intentional: Due to an expected outcome from pursuing a goal
Unintentional: Due to an unexpected outcome from pursuing a goal
Other: Without clearly specifying the intentionality
Not coded

Timing
Pre-deployment: Occurring before the AI is deployed
Post-deployment: Occurring after the AI model has been trained and deployed
Other: Without a clearly specified time of occurrence
Not coded
Sub-categories (7)
Concentration of market power (Trend toward market concentration)
"In the generative AI market, barriers to entry are very high. Developers need access to vast volumes of data, computational resources, technical expertise, and capital. Large technology companies with such access are able to exploit economies of scale, economies of scope, and feedback effects (learning effects from user-generated data).542 All this gives them an overwhelming advantage over smaller companies, making competition increasingly challenging for these smaller entities."
Domain: 6.1 Power centralization and unfair distribution of benefits

Concentration of market power (Negative effects of increased market concentration)
"The concentration of AI assets—encompassing data, hardware, and expertise—within a small group of global tech firms raises many concerns.564 Such a situation may stifle healthy competition, impede innovation, and potentially result in elevated costs for accessing AI technologies. Firms with control over essential resources for developing AI models may restrict access to these resources to prevent competition. For instance, if, in the future, training AI models increasingly relies on proprietary data, smaller organizations lacking access to such data might encounter significant barriers to entry and growth."
Domain: 6.1 Power centralization and unfair distribution of benefits

Impact on labor markets (job loss and displacement)
"Currently, a significant share of workers (three in five) worry about losing their jobs entirely to AI in the next 10 years—particularly those who already work with AI. Some studies conclude that AI tools (generative and non-generative) will create significant job losses.573 The OECD has found that occupations at highest risk of being lost to automation from AI account for about 27% of employment."
Domain: 6.2 Increased inequality and decline in employment quality

Impact on labor markets (rising inequalities)
"AI is more likely to displace workers when it is designed to replicate human skills and intelligence.597 In such cases, there is a risk of concentrating wealth and power in the hands of a few individuals or organizations that control the capital. In addition, ordinary people, including those with significant expertise, may become less valued because machines would be performing their roles. This shift could lower wages, reduce the value of human work, and exacerbate economic inequality."
Domain: 6.3 Economic and cultural devaluation of human effort

Environmental cost (energy consumption)
"Training large AI models requires a substantial amount of computing power to handle vast datasets, which translates into high energy consumption."
Domain: 6.6 Environmental harm

Environmental cost (water consumption)
"Data centers use water for cooling to prevent servers from overheating. The water consumption associated with AI training and inference processes can be substantial, impacting local water resources."
Domain: 6.6 Environmental harm

Artificial general intelligence (existential risk posed by Artificial General Intelligence)
"In a paper called “How Does Artificial Intelligence Pose an Existential Risk?” published in 2017, Karina Vold and Daniel Harris suggested that humans might create a super-intelligent machine that could outsmart all other intelligences, remain beyond human control, and potentially engage in actions that are contrary to human interests.635 The prevailing narrative surrounding AI existential risk typically lies in the possibility of developing “Artificial General Intelligence” (AGI), or artificial super-intelligence (ASI)."
Domain: 7.2 AI possessing dangerous capabilities

Other risks from G'sell (2024) (33)
Technical and operational risks
Domain: 7.3 Lack of capability or robustness

Technical and operational risks > Technical vulnerabilities (Robustness - unexpected behaviour)
Domain: 7.3 Lack of capability or robustness

Technical and operational risks > Technical vulnerabilities (Robustness - vulnerability to jailbreaking)
Domain: 2.2 AI system security vulnerabilities and attacks

Technical and operational risks > Technical vulnerabilities (The risk of misalignment)
Domain: 7.1 AI pursuing its own goals in conflict with human goals or values

Technical and operational risks > Factually incorrect content (inaccuracies and fabricated sources)
Domain: 3.1 False or misleading information

Technical and operational risks > Opacity (the black box problem)
Domain: 7.4 Lack of transparency or interpretability