A scammer in China used AI technology to change their face and voice to impersonate a businessman's trusted friend on a video call, convincing him to transfer 4.3 million yuan before the fraud was discovered.
In April, a Chinese businessman surnamed Guo received a video call from a scammer who had used AI technology to alter their face and voice to pass as one of his close friends. The caller claimed that a mutual friend needed 4.3 million yuan (S$823,000) withdrawn from a company bank account to pay a guarantee on a public tender. The fraudster asked for Guo's personal bank account number and then sent a screenshot of a fake payment record purporting to show that the money had already been transferred to that account.

Without verifying that the funds had arrived, Guo made two transfers totaling 4.3 million yuan from his company account. He discovered the fraud only after contacting his actual friend, who had no knowledge of the transaction. Police were alerted and recovered 3.4 million yuan by stopping some of the transfers, with efforts ongoing to recover the remaining funds.

The incident occurred amid China's push to become a global AI leader by 2030, with major tech companies racing to develop AI products. Concerns about AI misuse have prompted new regulations, including a law that took effect in January banning the use of deepfake technology to produce false news.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
Entity: AI system — due to a decision or action made by an AI system
Intent: Intentional — due to an expected outcome from pursuing a goal
Timing: Post-deployment — occurring after the AI model has been trained and deployed