
Difficult to develop metrics for evaluating benefits or harms caused by AI assistants

The Ethics of Advanced AI Assistants

Gabriel et al. (2024)

Sub-category: Risk Domain

Inadequate regulatory frameworks and oversight mechanisms that fail to keep pace with AI development, leading to ineffective governance and the inability to manage AI risks appropriately.

"Another difficulty facing AI assistant systems is that it is challenging to develop metrics for evaluating particular aspects of benefits or harms caused by the assistant – especially in a sufficiently expansive sense, which could involve much of society (see Chapter 19). Having these metrics is useful both for assessing the risk of harm from the system and for using the metric as a training signal."(p. 59)

Part of Capability failures
