Google's AI chatbot Gemini received 258 user reports globally about generating deepfake terrorist content and 86 reports about creating child exploitation material over an 11-month period.
Google disclosed to the Australian eSafety Commission that between April 2023 and February 2024, its Gemini AI received 258 user complaints globally alleging it had been used to create deepfake terrorism material and 86 reports alleging it had generated child abuse material. The disclosure was made under mandatory reporting requirements in Australian law, which oblige tech firms to periodically inform the eSafety Commission about their harm-minimization efforts or face fines. The eSafety Commission called the disclosure a 'world-first insight' into how users exploit AI technology to produce harmful and illegal content. Google stated that it does not allow the generation or distribution of content related to violent extremism, terrorism, or child exploitation. The company used hash-matching technology to identify and remove child abuse material made with Gemini but did not apply the same system to terrorist content. Google clarified that the reported numbers represent total global user reports, not confirmed policy violations, and emphasized its commitment to expanding safety efforts.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted individual or a fabricated persona for illegitimate financial benefit, or creating humiliating or sexual imagery.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.