Chinese social media platforms implemented AI-powered censorship systems that blocked discussions of sensitive topics, prompting users to develop creative linguistic workarounds using homophones, visual puns, and coded language to evade automated content moderation.
From 2018 through 2022, Chinese social media platforms including Weibo deployed AI-powered content moderation systems to automatically detect and censor discussions of sensitive political and social topics. The #MeToo hashtag was blocked, along with terms related to COVID-19 policies, government corruption, and protests. Users responded with sophisticated linguistic evasion techniques, substituting homophones such as 'rice bunny' for MeToo and 'Netherlands' for the Henan province protests, and visually modifying characters. In July 2022, Weibo announced a campaign to 'clean up' intentionally misspelled words and homophones, stating it would refine its 'keyword identification model' to better filter coded language. The report documents dozens of censored terms and user workarounds, from 'grass mud horse' for censorship itself to 'green horse' for COVID health codes. Millions of Chinese social media users are affected, forced to constantly adapt their language to discuss important social and political issues.
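The dynamic described above can be illustrated with a minimal sketch. This is a hypothetical toy example, not any platform's actual system: it shows why an exact-keyword blocklist fails against homophone substitutions like 'rice bunny' (米兔, mǐ tù, a near-homophone of 'MeToo'), and why moderators must keep folding coded variants back into their keyword models. The term lists and function names here are illustrative assumptions.

```python
# Toy illustration (not Weibo's actual system) of keyword filtering
# versus homophone-based evasion.

BLOCKED_TERMS = {"metoo"}  # hypothetical blocklist

# Coded substitutions of the kind the report documents: "rice bunny"
# (米兔) is a near-homophone of "MeToo" in Mandarin.
HOMOPHONE_MAP = {"rice bunny": "metoo", "米兔": "metoo"}

def naive_filter(post: str) -> bool:
    """Block a post only if it contains a blocked term verbatim."""
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

def extended_filter(post: str) -> bool:
    """Normalize known coded substitutions back to the canonical term
    before matching -- the kind of 'keyword identification model'
    refinement Weibo announced in July 2022."""
    text = post.lower()
    for coded, canonical in HOMOPHONE_MAP.items():
        text = text.replace(coded, canonical)
    return naive_filter(text)

post = "supporting the rice bunny movement"
print(naive_filter(post))     # False: coded language evades exact matching
print(extended_filter(post))  # True: normalization catches the substitution
```

The sketch also shows why the contest is ongoing: each time the filter's substitution map is extended, users can coin a new homophone that is again invisible to exact matching.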
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
AI system
Due to a decision or action made by an AI system
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed