An AI-generated deepfake video falsely portrayed Philippine President Ferdinand Marcos Jr. ordering military attacks against China; government officials later identified and debunked it.
In November 2024, a deepfake video circulated online featuring manipulated audio designed to sound like Philippine President Ferdinand Marcos Jr. ordering military action against China in the West Philippine Sea. The video carried the caption 'Atakehin ang China! Inutos na atakehin, PBBM may go-signal na' (Attack China! The order to attack has been given; PBBM has given the go-signal). The Presidential Communications Office (PCO) identified and dismissed the content as a deepfake, stating that no such directive existed or had been issued by the President. The video appeared on a YouTube account that was later terminated. Government officials suggested a foreign actor was likely behind the deepfake and announced they would investigate and file cases against those responsible. The PCO coordinated with multiple agencies, including the Department of Information and Communications Technology, the National Security Council, and the National Cybersecurity Inter-Agency Committee, to address the proliferation of such AI-generated content. The incident occurred amid heightened tensions between the Philippines and China over territorial disputes in the South China Sea.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
AI system
Due to a decision or action made by an AI system
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.