Competitive pressures in GPAI product release
AI developers or state-like actors competing in an AI ‘race’ by rapidly developing, deploying, and applying AI systems to maximize strategic or economic advantage, increasing the risk that they release unsafe and error-prone systems.
"In competitive situations, developers of general-purpose AI systems might cut corners on the safety evaluation of their GPAI model and instead spend more time and effort on the capabilities of those systems [183, 69]. This is especially dangerous if the capabilities of such AI systems are correlated with the risk they pose [162]."(p. 45)
Supporting Evidence (1)
"For example, competitive pressures can be exacerbated by market competition, where GPAI providers are primarily developing products to sell. Given the pro- hibitive cost to develop large models, losing such competition can compromise companies financially. This situation can incentivize companies to prioritize financial survival over safety."(p. 45)
Other risks from Gipiškis2024 (144)
Direct Harm Domains (content safety harms), all classified as 1.2 Exposure to toxic content:
- Violence and extremism
- Hate and toxicity
- Sexual content
- Child harm
- Self-harm