
Global AI Divide

International Scientific Report on the Safety of Advanced AI

Bengio et al. (2024)

Risk Domain

AI-driven concentration of power and resources within certain entities or groups, especially those with access to or ownership of powerful AI systems, leading to inequitable distribution of benefits and increased societal inequality.

"General-purpose AI research and development is currently concentrated in a few Western countries and China. This ‘AI Divide’ is multicausal, but in part related to limited access to computing power in low-income countries. Access to large and expensive quantities of computing power has become a prerequisite for developing advanced general-purpose AI. This has led to a growing dominance of large technology companies in general-purpose AI development. The AI R&D divide often overlaps with existing global socioeconomic disparities, potentially exacerbating them." (p. 57)

Supporting Evidence (3)

1. "There is a well-documented concentration of AI research and development, including research on potential societal impacts of AI, in Western countries and China (316, 548, 549). This global ‘AI Divide’ could become even larger for general-purpose AI specifically because of the high costs associated with general-purpose AI development. Some countries face substantial barriers to benefiting from general-purpose AI development and deployment, including lower digital skills literacy, limited access to computing resources, infrastructure challenges, and economic dependence on entities in higher-income countries (519, 550). Because general-purpose AI system development is so dominated by a few companies, particularly those based in the US, there are concerns that prominent general-purpose AI systems which are used worldwide primarily reflect the values, cultures and goals of large Western corporations. In addition, the recent trend towards aiming to develop ever-larger, more powerful general-purpose AI models could also exacerbate global supply chain inequalities (551), place demands on energy usage, and lead to harmful climate effects which also worsen global inequalities (552, 553). The global general-purpose AI divide could also be harmful if biased or inequitable general-purpose AI systems are deployed globally." (p. 57)
2. "Disparities in the concentration of skilled talent and the steep financial costs of developing and sustaining general-purpose AI systems could align the AI divide with existing global socioeconomic disparities. The United States has the largest percentage of elite AI researchers, contains a majority of the institutions who conduct top-tier research, and is the top destination for AI talent globally (554). However, countries leading in AI development also experience issues with the distribution of skilled AI talent, which is rapidly shifting towards industry. For example, 70 percent of graduates of North American universities with AI PhDs end up getting a job in private industry compared with 21% of graduates two decades ago (555). In April 2023, OpenAI's AI systems were reportedly estimated to incur $700k/day in inference costs (77), a cost that is widely inaccessible for the vast majority of academic institutions and companies and even more so for those based in the Global South (556, 557). Low-resource regions also experience challenges with access to data given the high costs of collection, labelling, and storage. The lower availability of skilled talent to leverage these datasets for model development purposes could further contribute to the AI divide. Infrastructure concerns are a major factor that prohibit equitable access to the resources needed to train and implement general-purpose AI due to issues such as inadequate access to broadband internet (558, 559), power blackouts and insufficient access to electricity (560, 561)." (p. 57)
3. "The delegation of lower-level AI work to workers in low-income countries has led to a 'ghost work' industry. From content moderation to proofreading to data labelling, a lot of human labour that the typical consumer is usually not aware of – sometimes referred to as ‘ghost work’ – is necessary for many products of large technology companies (565). The increasing demand for data to train general-purpose AI systems, including human feedback to aid in training, has further increased the reliance on ghost work including the creation of firms helping big technology companies to outsource various aspects of data production, including data collection, cleaning, and annotation." (p. 58)
