Deployment of GPAI agents in finance
Users may anthropomorphize, trust, or rely on AI systems, leading to emotional or material dependence and to inappropriate relationships with, or expectations of, those systems. This trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or can result in harm when AI is used inappropriately in critical situations (e.g., a medical emergency). Overreliance on AI systems can also compromise autonomy and weaken social ties.
"The deployment of GPAI based agents in the financial sector can negatively impact market stability due to correlated autonomous actions, high intercon- nectedness, or incentive misalignment [4]. Furthermore, such GPAI agents in the same environment are vulnerable to classical challenges in multi-agent systems [63], such as coordination and security of the agents."(p. 48)
Supporting Evidence (1)
"For example, an agent tasked with predicting the value of a commodity, for which numerous agents depend on to make their own predictions, can be tar- geted with unreliable data, compromising the actions of both the main agent and its dependents."(p. 49)
Other risks from Gipiškis2024 (143)
Direct Harm Domains (content safety harms)
- 1.2 Exposure to toxic content (Violence and extremism)
- 1.2 Exposure to toxic content (Hate and toxicity)
- 1.2 Exposure to toxic content (Sexual content)
- 1.2 Exposure to toxic content (Child harm)
- 1.2 Exposure to toxic content (Self-harm)
- 1.2 Exposure to toxic content