
Market concentration risks and single points of failure

International Scientific Report on the Safety of Advanced AI

Bengio et al. (2024)

Sub-category: Risk Domain

AI-driven concentration of power and resources within certain entities or groups, especially those with access to or ownership of powerful AI systems, leading to inequitable distribution of benefits and increased societal inequality.

"Market power is concentrated among a few companies that are the only ones able to build the leading general-purpose AI models. Widespread adoption of a few general-purpose AI models and systems by critical sectors including finance, cybersecurity, and defence creates systemic risk because any flaws, vulnerabilities, bugs, or inherent biases in the dominant general-purpose AI models and systems could cause simultaneous failures and disruptions on a broad scale across these interdependent sectors." (p. 58)

Supporting Evidence (3)

1. "Developing state-of-the-art, general-purpose AI models requires substantial up-front investment. These very high costs create barriers to entry, disproportionately benefiting large technology companies." (p. 58)
2. "These tendencies towards market concentration in the general-purpose AI industry are particularly concerning because of general-purpose AI's potential to enable greater centralisation of decision-making in a few companies than ever before. Since society at large could benefit as well as suffer from these decisions, this raises questions about the appropriate governance of these few large-scale systems. A single general-purpose AI model could potentially influence decision-making across many organisations and sectors (571) in ways which might be benign, subtle, inadvertent, or deliberately exploited. There is the potential for the malicious use of general-purpose AI as a powerful tool for manipulation, persuasion and control by a few companies or governments. Potentially harmful biases such as demographic, personality traits, and geographical bias, which might be present in any dominant general-purpose AI model that become embedded in multiple sectors, could propagate widely. For example, popular text-to-image models like DALL-E 2 and Stable Diffusion exhibit various demographic biases across occupations, personality traits, and geographical contexts (576)." (p. 59)
3. "The increasing dependence on a few AI systems across critical sectors introduces systemic risks. Errors, bugs, or cyberattacks targeting these systems could cause widespread disruption. Different scenarios have been proposed that illustrate potential disruptions. For example, a denial-of-service attack on a widely used AI API could disrupt critical public infrastructure which relies on that technology. In finance, the adoption of homogeneous AI systems by multiple institutions could destabilise markets by synchronising participants' decisions (577): If several banks rely on one model, they may inadvertently make similar choices, creating systemic vulnerabilities (2*). Comparable risks could potentially arise in domains, like defence or cybersecurity, if AI systems with similar functionality are widely deployed (see also 4.4. Cross-cutting risk factors)." (p. 59)
