The FBI warned of an increase in complaints about malicious actors using deepfake technology and stolen personally identifiable information to fraudulently apply for remote work positions, particularly in IT and programming roles with access to sensitive data.
The FBI Internet Crime Complaint Center (IC3) issued a warning about an increasing trend of fraudulent applications for remote work positions that use deepfake technology and stolen personally identifiable information (PII). The targeted positions primarily include information technology, computer programming, database, and software-related job functions, some of which provide access to customer PII, financial data, corporate IT databases, and proprietary information. Complaints reported the use of voice spoofing, or potential voice deepfakes, during online interviews, in which the lip movements of the candidates on camera did not coordinate with their audio. Additional indicators included coughing, sneezing, and other auditory actions that did not match the visual presentation. The FBI also noted that pre-employment background checks revealed some applicants were using stolen PII belonging to other individuals; those victims reported that their identities had been used in these fraudulent applications without their knowledge.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial gain, or creating humiliating or sexual imagery.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed