
Lack of transparency and interpretability

Future Risks of Frontier AI

Government Office for Science (2023)

Category: Risk Domain

Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.

"Today's Frontier AI is difficult to interpret and lacks transparency. Contextual understanding of the training data is not explicitly embedded within these models. They can fail to capture perspectives of underrepresented groups or the limitations within which they are expected to perform without fine tuning or reinforcement learning with human feedback (RLHF)."(p. 23)
