
Safe exploration problem with widely deployed AI assistants

The Ethics of Advanced AI Assistants

Gabriel et al. (2024)

Risk domain

AI systems that fail to perform reliably or effectively under varying conditions, leading to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.

"Moreover, we can expect assistants – that are widely deployed and deeply embedded across a range of social contexts – to encounter the safe exploration problem referenced above (Amodei et al., 2016). For example, new users may have different requirements that need to be explored, or widespread AI assistants may change the way we live, thus leading to a change in our use cases for them (see Chapters 14 and 15). To learn what to do in these new situations, the assistants may need to take exploratory actions. This could be unsafe, for example a medical AI assistant when encountering a new disease might suggest an exploratory clinical trial that results in long-lasting ill health for participants." (p. 59)
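The safe exploration problem described in the quote can be made concrete with a small sketch. The code below is hypothetical and not from Gabriel et al. (2024): it shows an epsilon-greedy bandit agent whose exploratory actions are confined to those currently estimated to be low-risk, one common way to bound the harm an exploring assistant can cause. All names (`choose_action`, `risk_budget`, the example actions) are illustrative assumptions.

```python
import random

def choose_action(value_est, risk_est, risk_budget=0.1, epsilon=0.2):
    """Pick an action, exploring only among actions estimated to be safe.

    value_est: dict mapping action -> estimated value (reward)
    risk_est:  dict mapping action -> estimated risk of harm
    Exploration (the random branch) is restricted to the safe set, so a
    poorly understood but high-risk action is never tried speculatively.
    """
    safe = [a for a in value_est if risk_est[a] <= risk_budget]
    if not safe:
        # No action is believed safe: abstain (an assistant would defer to a human).
        return None
    if random.random() < epsilon:
        return random.choice(safe)          # exploratory step, safe set only
    return max(safe, key=value_est.get)     # greedy step over the safe set

# Toy version of the medical example: the exploratory trial looks valuable
# but carries high estimated risk, so it is excluded from exploration.
value_est = {"standard_treatment": 0.6, "new_trial": 0.9, "no_action": 0.2}
risk_est  = {"standard_treatment": 0.05, "new_trial": 0.5, "no_action": 0.0}
print(choose_action(value_est, risk_est))   # never "new_trial" (risk 0.5 > 0.1)
```

The limitation the quoted passage points to is exactly the weak link in this sketch: `risk_est` must itself be learned, and for a genuinely new situation (a new disease, a new use case) the agent's risk estimates may be wrong, so the safe set is only as trustworthy as the model behind it.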

Part of Capability failures
