A deepfake video falsely showing President Biden making transphobic remarks was created using AI voice cloning technology and spread on social media, garnering thousands of views before being debunked by fact-checkers.
In February 2023, a deepfake video circulated showing President Joe Biden appearing to make transphobic remarks. The underlying footage came from a January 25, 2023 speech about U.S. support for Ukraine, in which Biden discussed tanks and NATO assistance. The fabricated audio was generated with AI voice cloning technology, reportedly ElevenLabs' voice synthesis platform, and had Biden saying phrases like 'You will never be a real woman' and other derogatory comments about transgender people; his mouth movements were manipulated to match the replaced audio. The deepfake was created by Twitter user @MachiavelliMemz and spread across social media platforms including Instagram and Facebook, where it garnered over 12,000 likes in five days. Fact-checking organizations including USA TODAY, Reuters, Lead Stories, and PolitiFact debunked the video, and digital forensics experts confirmed it was a deepfake produced with voice cloning algorithms and lip-syncing technology. The incident highlighted concerns about the democratization of deepfake technology and its potential for spreading disinformation.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
AI system (due to a decision or action made by an AI system)
Intentional (due to an expected outcome from pursuing a goal)
Post-deployment (occurring after the AI model has been trained and deployed)