Advances in AI have made powerful dual-use technologies like voice cloning, deep fakes, content generation, and data-gathering tools cheaper, more efficient, and easier to use. With modest hardware requirements, these technologies are now within the reach of a broader group of users, including those with malicious intent. Disinformation, the deliberate propagation of false or misleading information, usually with the intent to cause harm, influence behavior, or gain a financial or political advantage, is already a serious problem.
AI tools could be used to amplify the impact and scope of disinformation through more personalized, convincing, and far-reaching messaging. For example, the use of advanced AI in phishing schemes enables cybercriminals to automate the creation of highly sophisticated image, video, and audio communications. These communications can be tailored to individual recipients (sometimes including the cloned voice of a loved one), making them more likely to be successful and harder for both users and anti-phishing tools to detect. In the realm of surveillance, AI could support and enhance the mass gathering of personal data. Historically, mass surveillance required extensive manual effort. Machine learning tools can now link and process large datasets much more efficiently and cheaply than human analysts and can make predictions and decisions without human intervention. Through microtargeting, actors could manipulate individual behavior more subtly and effectively using AI-derived insights from their personal data and online behavior.
In the hands of nefarious state actors, such capabilities could be used to enhance the effectiveness of illegitimate domestic surveillance campaigns and to facilitate oppression and control. All of the capabilities mentioned above could converge to facilitate the large-scale manipulation and control of what people see, hear, and believe.
Excerpt from the MIT AI Risk Repository full report
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
Incident volume relative to governance coverage — each dot is one of 24 subdomains
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk arises pre- or post-deployment
An AI-generated image showing Thai Prime Minister Anutin Charnvirakul dining with accused money launderer Benjamin Mauerberger was spread on social media on the eve of Thailand's 2026 general election, creating false evidence of their relationship.
Developers: Google
Deployers: Csi La (Facebook Page), Unknown Disinformation Actors
AI tools including Google's Gemini were used to create misleadingly altered images of a police shooting incident in Minneapolis, spreading false information about the events on social media.
Developers: Google
Deployers: Pro-Trump Social Media Influencers, Unidentified Social Media Users, Partisan Online Accounts, Unknown Disinformation Actors
The White House posted a digitally altered image of a demonstrator using AI tools, showing her crying and with darkened skin, after she was arrested for interrupting a church service in Minnesota.
Developers: Unknown Deepfake Technology Developers, Unknown Image Generator Developers
Deployers: White House, White House Communications Team, Executive Office of the President
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
184 shared governance docs
Inadequate regulatory frameworks and oversight mechanisms that fail to keep pace with AI development, leading to ineffective governance and the inability to manage AI risks appropriately.
166 shared governance docs
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
159 shared governance docs
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise user expectation of privacy, assist identity theft, or cause loss of confidential intellectual property.
158 shared governance docs
Establishes the Artificial Intelligence Futures Steering Committee by April 1, 2026, under the Secretary of Defense. Directs it to develop policies for AI adoption, assess AI trajectories, and analyze AI risks and adversary developments. Requires quarterly meetings and a report to the U.S. Congress by January 31, 2027.
Prohibits the Department of Defense and its contractors from using covered artificial intelligence (AI) developed by covered AI companies within 30 days of enactment. Allows the Secretary of Defense to issue waivers for research or national security purposes with necessary risk mitigation steps. Provides definitions of covered AI systems, companies, and nations.
Prohibits the use of DeepSeek on intelligence community systems. Requires the Director of National Intelligence to develop removal standards and guidelines. Includes exceptions for national security and research, with risk mitigation. Aligns with existing information security requirements.