An AI-generated deepfake video depicted U.S. State Department spokesman Matthew Miller making fabricated statements about Ukraine policy and potential targets in Russia; it was then spread through Russian media outlets and Telegram channels to mislead audiences.
A 49-second deepfake video, created with AI tools, manipulated genuine footage of U.S. State Department spokesman Matthew Miller to show him making fabricated statements about U.S. policy on Ukraine and Russia. The video falsely depicted Miller claiming that the Russian city of Belgorod had 'essentially no civilians remaining' and was 'practically full of military targets,' implying Western support for indiscriminate strikes on the city. The deepfake also showed Miller answering a manufactured reporter question about other countries permitting weapons strikes deep inside Russian territory. The video bore telltale signs of manipulation, including mismatched lip sync and shirt and tie colors that changed partway through. It circulated widely on Telegram channels followed by Belgorod residents and was disseminated by several Russian media outlets and websites without any acknowledgment that it was fake. Russian officials, including Valery Fadeyev, chairman of Russia's Human Rights Council, responded to the video's false claims as if they were authentic, and state news agency TASS ran articles based on the fabricated content. U.S. officials had no information about the video's origins but expressed concern about Russia's use of such techniques in disinformation campaigns.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
AI system
Due to a decision or action made by an AI system
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed