
Risks from Malfunctions

International Scientific Report on the Safety of Advanced AI

Bengio et al. (2024)

Category

None provided.

Sub-categories (3)

Risks from product functionality issues

"Product functionality issues occur when there is confusion or misinformation about what a general- purpose AI model or system is capable of. This can lead to unrealistic expectations and overreliance on general- purpose AI systems, potentially causing harm if a system fails to deliver on expected capabilities. These functionality misconceptions may arise from technical difficulties in assessing an AI model's true capabilities on its own,or predicting its performance when part of a larger system. Misleading claims in advertising and communications can also contribute to these misconceptions."

5.1 Overreliance and unsafe use
Other · Unintentional · Other

Risks from bias and underrepresentation

"The outputs and impacts of general- purpose AI systems can be biased with respect to various aspects of human identity, including race, gender, culture, age, and disability. This creates risks in high- stakes domains such as healthcare, job recruitment, and financial lending. General- purpose AI systems are primarily trained on language and image datasets that disproportionately represent English- speaking and Western cultures, increasing the potential for harm to individuals not represented well by this data."

1.1 Unfair discrimination and misrepresentation
AI system · Unintentional · Post-deployment

Loss of control

"'Loss of control' scenarios are potential future scenarios in which society can no longer meaningfully constrain some advanced general-purpose AI agents, even if it becomes clear they are causing harm. These scenarios are hypothesised to arise through a combination of social and technical factors, such as pressures to delegate decisions to general-purpose AI systems, and limitations of existing techniques used to influence the behaviours of general-purpose AI systems."

7.1 AI pursuing its own goals in conflict with human goals or values
Other · Other · Post-deployment

Other risks from Bengio et al. (2024) (14)