NewsGuard identified 49 websites across seven languages that were entirely or mostly generated by AI language models and operated as content farms, churning out clickbait articles for advertising revenue. Some published false information, including a fake report of President Biden's death.
In April 2023, NewsGuard identified 49 websites spanning seven languages that appear to be entirely or mostly generated by artificial intelligence language models and presented as news websites. The sites produce high volumes of content on topics including politics, health, entertainment, finance, and technology, with some publishing hundreds of articles daily. Many are saturated with advertisements, indicating they were designed to generate revenue from programmatic advertising. The AI-generated articles often consist of content summarized or rewritten from other sources, marked by the bland language and repetitive phrasing typical of AI generation. All 49 sites had published at least one article containing error messages commonly found in AI-generated text, such as 'my cutoff date in September 2021' and 'I cannot complete this prompt.' Some sites advanced false narratives: CelebritiesDeaths.com, for example, published a fake article claiming President Biden had died, though the article itself went on to disclose that it was AI-generated and that producing it violated OpenAI's use policies. The sites typically use generic names suggesting established publishers, feature fake author profiles with stolen photos, and include algorithmically generated About and Privacy Policy pages with unfilled template fields. NewsGuard contacted the 29 sites with available contact information; only two confirmed using AI, while most of the others did not respond or had provided invalid contact details.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead users to form inaccurate beliefs and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed