Trustworthiness and Autonomy
Risk Domain
Delegation by humans of key decisions to AI systems, or AI systems making decisions that diminish human control and autonomy, potentially leading to humans feeling disempowered, losing the ability to shape a fulfilling life trajectory, or becoming cognitively enfeebled.
"Human trust in systems, institutions, and people represented by system outputs evolves as generative AI systems are increasingly embedded in daily life." (p. 11)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Supporting Evidence (3)
1.
"Trust in Media and Information": "High capability generative AI systems create believable outputs across modalities and level of risk depends on use case. From impersonation spurring spamming to disinformation campaigns, the spread of misinformation online can be perpetuated by reinforcement and volume; people are more likely to believe false information when they see it more than once, for example if it has been shared by multiple people in their network. This can have devastating real world impacts, from attempting dangerous COVID-19 treatments [160], to inciting violence [146], and the loss of trust in mainstream news [95]. The increasing sophistication of generative AI in recent years has expanded the possibilities of misinformation and disinformation campaigns, and made it harder for people to know when they should trust what they see or hear [41]." (p. 11)
2.
"Overreliance on Outputs: Overreliance on automation in general is a long-studied problem, and carries over in novel and important ways to AI-generated content. People are prone to overestimate and put a higher degree of trust in AI generated content, especially when outputs appear authoritative or when people are in time-sensitive situations. This can be dangerous because many organizations are pursuing the use of large language models to help analyze information despite persistent flaws and limitations, which can lead to the spread of biased and inaccurate information [103]. The study of human-generative AI relationships is nascent, but growing, and highlights that the anthropomorphism [13] of these technologies may contribute to unfounded trust and reliance [192, 225]. Improving the trustworthiness of AI systems is an important ongoing effort across sectors [159, 161]. Persistent security vulnerabilities in large language models and other generative AI systems are another reason why overreliance can be dangerous. For example, data poisoning, backdoor attacks, and prompt injection attacks can all trick large language models into providing inaccurate information in specific instances [220]." (p. 11)
3.
"Personal Privacy and Sense of Self: Privacy is linked with autonomy; to have privacy is to have control over information related to oneself. Privacy can protect both powerful and vulnerable peoples and is interpreted and protected differently by culture and social classes throughout history. Personal and private information has many legal definitions and protections globally [2] and when violated, can be distinct from harm [47] and refer to content that is shared, seen, or experienced outside of the sphere a person has consented to." (p. 12)
Other risks from Solaiman et al. (2023) (11)
Bias, Stereotypes, and Representational Harms
1.1 Unfair discrimination and misrepresentation (Entity: AI system; Intent: Unintentional; Timing: Other)
Cultural Values and Sensitive Content
1.2 Exposure to toxic content (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)
Disparate Performance
1.3 Unequal performance across groups (Entity: AI system; Intent: Unintentional; Timing: Other)
Privacy and Data Protection
2.1 Compromise of privacy by leaking or correctly inferring sensitive information (Entity: Human; Intent: Other; Timing: Other)
Financial Costs
6.1 Power centralization and unfair distribution of benefits (Entity: Human; Intent: Intentional; Timing: Other)
Environmental Costs
6.6 Environmental harm (Entity: Human; Intent: Unintentional; Timing: Other)