
Social-Engineering

Trustworthy LLMs: A Survey and Guideline for Evaluating Large Language Models’ Alignment

Liu et al. (2024)

Sub-category
Risk Domain

Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial benefit, or creating humiliating or sexual imagery.

“psychologically manipulating victims into performing the desired actions for malicious purposes” (p. 20)

Supporting Evidence (2)

1.
“Social-engineering attacks include phishing [294, 295], spams/bots [296, 297], impersonating [298, 299] (including deepfake [299]), fake online content [51, 300, 301, 302], and social network manipulation [303, 304, 305], etc.” (p. 20)
2.
“Almost all types of social-engineering attacks can be enhanced by leveraging LLMs, especially in contextualizing deceptive messages to users. For example, recently people have also shown the possibility of using an LLM to impersonate a person’s style of conversation [298]” (p. 20)

Part of Resistance to Misuse
