Opacity (industry opacity)
AI developers or state-like actors competing in an AI ‘race’ by rapidly developing, deploying, and applying AI systems to maximize strategic or economic advantage, increasing the risk that they release unsafe and error-prone systems.
"Opacity is not solely due to the technological complexity that limits developers’ and users’ understanding of how generative models function on a technical level. It is further exacerbated by the practices of organizations and companies that are advancing the field. Many are private companies that choose to withhold from the public many of the precise characteristics of their most advanced models."(p. 69)
Supporting Evidence (1)
"AI developers cite both near-term competition and security risks to justify withholding many vital details of their models from the general public."(p. 69)
Part of Technical and operational risks
Other risks from G'sell (2024) (33)
Technical and operational risks
Technical and operational risks > Technical vulnerabilities (Robustness - unexpected behaviour)
7.3 Lack of capability or robustness
Technical and operational risks > Technical vulnerabilities (Robustness - vulnerability to jailbreaking)
2.2 AI system security vulnerabilities and attacks
Technical and operational risks > Technical vulnerabilities (The risk of misalignment)
7.1 AI pursuing its own goals in conflict with human goals or values
Technical and operational risks > Factually incorrect content (inaccuracies and fabricated sources)
3.1 False or misleading information
Technical and operational risks > Opacity (the black box problem)
7.4 Lack of transparency or interpretability