Major AI chatbots including ChatGPT, Copilot, Gemini, DeepSeek, and Grok were found to reproduce Chinese Communist Party propaganda and censorship when prompted on sensitive topics, with responses varying significantly between English and Chinese prompts.
The American Security Project analyzed five popular large language model chatbots: OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, DeepSeek's R1, and xAI's Grok. Investigators prompted each model in both English and Simplified Chinese on subjects that the People's Republic of China considers controversial. Every chatbot examined sometimes returned responses indicative of CCP-aligned censorship and bias, and Microsoft's Copilot appeared more likely than the other US models to present CCP propaganda and disinformation as authoritative or as on equal footing with factual information.

The investigation revealed significant discrepancies in how the chatbots responded depending on the language of the prompt. When asked about the origins of COVID-19 in English, most models outlined the leading scientific theories, including a potential lab leak; in Chinese, all models described the origin as an 'unsolved mystery' or a 'natural spillover event'. Similar divergences occurred on questions about Hong Kong's freedoms and the Tiananmen Square Massacre, with Chinese-language responses sanitized to align with CCP framing.

The report identified the root cause as CCP manipulation of global AI training data: astroturfing campaigns and state media amplification have contaminated the datasets used to train these models.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.