AI capabilities can be exploited for personal gain at the expense of others through deception and manipulation. This can take various forms, including cheating, fraud, scams, and the use of deepfakes for blackmail or humiliation. It is currently very difficult to distinguish human-written text from AI-generated text, which increases opportunities for cheating in settings where rewards depend on the communication of original thought. In academia, students may use AI to quickly generate essays or other coursework and claim the work as their own. If students regularly and inappropriately rely on AI for their schooling, this could undermine academic integrity and genuine intellectual development. In science, researchers could use AI unscrupulously to produce professional outputs. If widely adopted, this practice could dilute the overall quality of scientific discourse.
Generative AI products may also increase the reach and potency of various dishonest schemes. Advanced AI assistants can produce HTML, CSS, and other web development languages, allowing for the rapid creation of convincing fraudulent websites and applications at scale. In the context of social media, generative adversarial networks (GANs) have been used to create images of human faces that look authentic. AI models can also be trained on speech or writing data from a specific individual, allowing the model to impersonate that person convincingly without their consent. Scammers could use this capability to request sensitive information or money by pretending to be a trusted contact. A particularly damaging type of abuse facilitated by deepfakes involves creating non-consensual sexual imagery with the intent to damage the subject's reputation or coerce them into performing desired actions. Even after a deepfake is exposed as inauthentic, it can continue to affect a person's life in significant ways through lost job opportunities, social isolation, and ongoing harassment or defamation.
Excerpt from the MIT AI Risk Repository full report
Using AI systems to gain a personal advantage over others such as through cheating, fraud, scams, blackmail or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism for research or education, impersonating a trusted or fake individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
Incident volume relative to governance coverage — each dot is one of 24 subdomains
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk arose pre- or post-deployment
An autonomous AI agent named MJ Rathbun published a personalized hit piece attacking a software maintainer's reputation after its code contribution was rejected, representing the first documented case of AI-driven reputational blackmail in the wild.
Developers: OpenClaw, Moltbook
Deployers: Unknown Deployer of MJ Rathbun, MJ Rathbun
Malicious actors created and distributed hundreds of fake OpenClaw AI skills that appeared legitimate but contained hidden malware, credential theft capabilities, and cryptocurrency wallet hijacking code, affecting users who installed these compromised automation tools.
Developers: Unknown Malicious Actors, OpenClaw
Deployers: Unknown Threat Actors Distributing Malicious OpenClaw Skills, Unknown Threat Actors, Unknown Malicious Actors
AI-generated deepfake images falsely depicted TV presenter Kate Garraway with fictitious romantic partners, causing emotional distress to her children and spreading misinformation about her personal life.
Developers: Unknown Deepfake Technology Developers, Unknown Image Generator Developers
Deployers: Unknown Malicious Actors
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
186 shared governance docs
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise user expectation of privacy, assist identity theft, or cause loss of confidential intellectual property.
182 shared governance docs
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
174 shared governance docs
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
173 shared governance docs
Requires the Secretary of Defense to develop a cybersecurity policy for AI/ML systems no later than 180 days after the act is passed and to conduct a comprehensive review of the effectiveness of AI/ML policies. Addresses potential security risks, implements methods to mitigate those risks, and establishes standard policy. Requires a comprehensive report on the threats and cybersecurity measures by August 31, 2026.
Requires large frontier developers to implement and publish frontier AI frameworks, assess catastrophic risks, and publish transparency reports; requires the Office of Emergency Services to establish reporting mechanisms for critical safety incidents and catastrophic risk assessments; establishes a consortium to develop a framework for the creation of CalCompute; creates civil penalties for violations of this chapter.
Encourages AI innovation by removing regulations, revising funding based on states' AI climate, and reviewing FTC actions. Promotes free speech in AI systems, revises procurement guidelines, and evaluates international AI models. Supports open-source AI use, workforce retraining, and safeguards against deepfakes. Advances AI infrastructure development, cybersecurity, international diplomacy, and semiconductor manufacturing. Prioritizes AI R&D, interpretability, evaluations, national security assessments, and biosecurity measures.