Indian politicians used AI deepfake technology to create personalized voice clones and video messages for voter outreach during the 2024 general election; over 50 million AI-generated calls were made to voters, who often did not realize they were interacting with artificial intelligence.
During India's 2024 general election, politicians across the ideological spectrum employed AI deepfake technology for voter outreach. Divyendra Singh Jadoun's company, Polymath Synthetic Media Solutions, created AI clones for politicians such as Shakti Singh Rathore, charging $1,500 for a digital avatar and $720 for an audio clone. The technology was used to place personalized calls to voters in multiple languages, addressing recipients by name and delivering tailored messages about local issues. Over 50 million AI-generated voice-clone calls were made in the two months before voting began in April; companies like iToConnect conducted 25 million personalized AI calls in Telangana and Andhra Pradesh alone. Deceased politicians were also digitally resurrected with AI: Y.S. Rajasekhara Reddy appeared endorsing his son, and H. Vasanthakumar endorsed his son Vijay Vasanth. The technology allowed politicians to reach voters in remote areas and to communicate across India's 22 official languages and thousands of regional dialects. Many voters, particularly in rural areas, did not realize they were speaking with AI clones and expressed delight at receiving personal calls from candidates. This sanctioned deepfake industry grew into a $60 million business opportunity, even as interactive AI calls continued to suffer technical problems, including delays, hallucinations, and poor pronunciation.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs may experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed