Researchers discovered more than 1,000 LinkedIn profiles with AI-generated deepfake profile photos; companies deployed these fake personas to circumvent LinkedIn's messaging limits and run lead-generation campaigns for sales and marketing.
Stanford Internet Observatory researchers Renée DiResta and Josh Goldstein identified more than 1,000 LinkedIn profiles using facial images created by generative adversarial networks (GANs). The investigation began when DiResta received a suspicious message from 'Keenan Ramsey,' whose profile photo showed telltale signs of AI generation, including eyes perfectly centered in the frame, a single earring, and a blurry background.

The fake profiles were used primarily for business development and sales, allowing companies to send marketing messages without hiring additional staff or hitting LinkedIn's messaging limits. More than 70 businesses were listed as employers on these fake profiles; many said they had hired outside vendors for lead generation and were unaware that AI-generated images were being used. RingCentral alone had 60 fake profiles claiming employment there, though none of those individuals actually worked for the company. Universities contacted by NPR confirmed that none of the educational credentials listed were legitimate.

LinkedIn investigated and removed the profiles that violated its policies against fake accounts and falsified information. The practice appears to have grown during the pandemic as businesses sought new ways to generate online sales leads once in-person meetings became difficult.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed