AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
"One reason AI systems fail is because they lack the capability or skill needed to do what they are asked to do."(p. 59)
Sub-categories (3)
Lack of capability for task
"As we have seen, this could be due to the skill not being required during the training process (perhaps due to issues with the training data) or because the learnt skill was quite brittle and was not generalisable to a new situation (lack of robustness to distributional shift). In particular, advanced AI assistants may not have the capability to represent complex concepts that are pertinent to their own ethical impact, for example the concept of 'benefitting the user' or 'when the user asks' or representing 'the way in which a user expects to be benefitted'."
Difficult to develop metrics for evaluating benefits or harms caused by AI assistants
"Another difficulty facing AI assistant systems is that it is challenging to develop metrics for evaluating particular aspects of benefits or harms caused by the assistant – especially in a sufficiently expansive sense, which could involve much of society (see Chapter 19). Having these metrics is useful both for assessing the risk of harm from the system and for using the metric as a training signal."
Safe exploration problem with widely deployed AI assistants
"Moreover, we can expect assistants – that are widely deployed and deeply embedded across a range of social contexts – to encounter the safe exploration problem referenced above Amodei et al. (2016). For example, new users may have different requirements that need to be explored, or widespread AI assistants may change the way we live, thus leading to a change in our use cases for them (see Chapters 14 and 15). To learn what to do in these new situations, the assistants may need to take exploratory actions. This could be unsafe, for example a medical AI assistant when encountering a new disease might suggest an exploratory clinical trial that results in long-lasting ill health for participants."
Other risks from Gabriel et al. (2024) (69)
Goal-related failures → 7.1 AI pursuing its own goals in conflict with human goals or values
Goal-related failures > Misaligned consequentialist reasoning → 7.3 Lack of capability or robustness
Goal-related failures > Specification gaming → 7.1 AI pursuing its own goals in conflict with human goals or values
Goal-related failures > Goal misgeneralisation → 7.1 AI pursuing its own goals in conflict with human goals or values
Goal-related failures > Deceptive alignment → 7.1 AI pursuing its own goals in conflict with human goals or values
Malicious Uses → 4.0 Malicious Actors & Misuse