A suspected deepfake job applicant used AI-generated video to impersonate a real IT executive during a remote hiring interview at a Japanese company, with the incident linked to broader North Korean schemes to secure overseas employment.
In early March 2024, a Japanese IT company conducted an online hiring interview with a male applicant who identified himself as 'Kefumi Yoshitake' and claimed to have been raised in the United States. The applicant requested fully remote work and ended the interview after about two minutes upon being told that in-person attendance was required. The recruiter later discovered that the applicant's resume and profile matched those of Kenbun Yoshii, CEO of Tokyo-based IT firm Reunion Software.

Analysis by multiple organizations, including Okta and a Tokyo-based deepfake detection startup, concluded that the interview video was likely AI-generated, citing irregularities such as unnatural hairline boundaries, brief eye misalignment, and lip movements mismatched with the audio. Yoshii reported that publicly available images and videos of him appeared to have been used to create the fake identity, and he subsequently received multiple reports of similar applicants using his identity at other companies.

According to Okta, over 6,500 similar cases have been identified globally in recent years, involving individuals believed to be North Korean IT workers using fake identities to obtain remote jobs at foreign companies. Trend Micro analysis found evidence that North Korean cyber groups have been experimenting with deepfake technology and producing large volumes of falsified resumes, particularly ones claiming full-stack engineering expertise.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed