Economic Power Centralisation and Inequality
Governing General Purpose AI: A Comprehensive Map of Unreliability, Misuse and Systemic Risks
Maham & Küspert (2023)
AI-driven concentration of power and resources within certain entities or groups, especially those with access to or ownership of powerful AI systems, leading to inequitable distribution of benefits and increased societal inequality.
"Increasingly advanced general purpose AI models pose the risk of a concentration of economic power and exacerbation of existing inequalities through disparities in effective access to these models. This can materialise on multiple levels, between developers of general purpose AI models and companies building applications on them, between individuals and between countries on a global scale." (p. 35)
Supporting Evidence (5)
"General purpose AI could worsen wealth and income inequality as it is expected to result in financial benefits mostly concentrated amongst the few developers of this technology and the many providers of downstream applications building on these models.187" (p. 35)
"If these models are increasingly able to substitute for workers across different skill levels, this could shift income away from labour towards owners and developers of the models and their applications.189 If general purpose AI models lead to a displacement of workers, this could further worsen income inequality, though the scale of this potential job displacement is debated among experts.190" (p. 35)
"The small number of companies with enough resources to build general purpose AI models retains a certain level of control over how their models are re-used and distributed, and thus economic power in influencing who can access their technology.191 Training general purpose AI models requires increasingly large amounts of computational resources (see Figure 3). Many of the value-generating applications are built upon a few general purpose AI models which are being developed by a small number of well-resourced companies with a significant first-mover advantage, namely Meta, Microsoft and its partner OpenAI, and Alphabet with its Google DeepMind team and investee Anthropic, as outlined in What are general purpose AI models?. To build applications on these models, downstream developers require direct or indirect access to the model, resulting in dependencies" (p. 35)
"Releasing models via API, either with or without options to modify the model, or open-source, determines the level of control developers of general purpose AI models keep. This includes granting access to business customers or individual users, monitoring downstream (mis)use and monetising the models after releasing them.192 Some dependencies exist even for open-source models since the initial developers retain a certain level of control about what information, such as training data and process, they share and additional services they offer." (p. 36)
"Further, to effectively commercialise these applications, computing power is needed to continuously run them, which is often offered in partnership with cloud service providers, an already concentrated market led by Amazon’s AWS, Alphabet’s Google Cloud, and Microsoft’s Azure.194 Further barriers include access to high-quality datasets, data storage, and access to low-latency and high-bandwidth internet." (p. 36)
Part of Systemic Risks
Other risks from Maham & Küspert (2023) (10)
Misuse Risks: 4.0 Malicious Actors & Misuse
Misuse Risks > Cybercrime: 4.3 Fraud, scams, and targeted manipulation
Misuse Risks > Biosecurity Threats: 4.2 Cyberattacks, weapon development or use, and mass harm
Misuse Risks > Politically motivated misuse: 4.1 Disinformation, surveillance, and influence at scale
Systemic Risks: 6.1 Power centralization and unfair distribution of benefits
Systemic Risks > Ideological Homogenization from Value Embedding: 1.3 Unequal performance across groups