Scammers used AI deepfake voice technology to impersonate Ferrari CEO Benedetto Vigna in an attempted fraud against a Ferrari executive, but the scam was thwarted when the executive asked about a book Vigna had recently recommended, a detail only the real CEO would know.
An unnamed Ferrari executive received WhatsApp messages from someone posing as CEO Benedetto Vigna about a supposedly confidential deal involving a currency-hedge transaction. The messages came from an unfamiliar phone number, which the scammer explained away by citing the need for secrecy. A follow-up phone call featured a convincing AI-generated deepfake voice that accurately mimicked Vigna's southern Italian accent and discussed complications with a China-related deal. However, the executive became suspicious of slight mechanical intonations in the voice and asked the caller to name the title of a book Vigna had recently recommended. When the scammer could not answer this personal question, the fraud attempt was exposed and prevented. The incident demonstrates how readily available voice-cloning technology can produce convincing deepfakes for social engineering attacks, but it also shows how shared personal knowledge can serve as an effective authentication check against such scams.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed