A hacker used an AI-generated deepfake voice to impersonate a Retool employee during a social engineering attack, successfully obtaining multi-factor authentication codes and breaching the company's systems, ultimately compromising 27 cloud customers.
In an incident that occurred last month, a hacker targeted Retool, an IT company that helps clients build business software. The attack began with SMS phishing messages sent to multiple Retool employees, claiming to come from IT staff about a payroll issue affecting healthcare coverage. One employee clicked the malicious URL and entered credentials, including a multi-factor authentication code, into a fake login portal. The attacker then called the employee, using AI-powered deepfake technology to replicate another employee's voice. The deepfaked voice demonstrated familiarity with the office layout, coworkers, and internal processes, which lent the call credibility even as the employee grew suspicious. The employee ultimately provided one additional MFA code, allowing the attacker to add their own device to the account and access the victim's GSuite account. Because Google Authenticator's cloud-syncing feature was enabled, the attacker gained access to all of the employee's MFA tokens, allowing them to penetrate Retool's internal systems. The breach ultimately affected 27 cloud customers before Retool revoked the attacker's access and disclosed the incident publicly.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
Human: Due to a decision or action made by humans
Intentional: Due to an expected outcome from pursuing a goal
Post-deployment: Occurring after the AI model has been trained and deployed