
Opacity (the black box problem)

Regulating under Uncertainty: Governance Options for Generative AI

G'sell (2024)

Risk Domain

Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.

"Opacity surrounding the technical, internal decision-making processes of generative AI models is popularly known as the “black box problem.”277 Generative AI models, most ubiquitously built on deep neural networks with hundreds of billions of internal connections,278 have become so complex that their internal decision-making processes are no longer traceable or interpretable to even the most advanced expert observers. This means that, while the inputs and outputs of a system can be observed, developers cannot explain in detail why specific inputs correspond to specific outputs."(p. 69)

Part of Technical and operational risks
