CNET published dozens of AI-generated articles containing factual errors and plagiarized content without proper disclosure, leading to corrections and temporary suspension of the AI program.
CNET, owned by Red Ventures, quietly published approximately 73-78 AI-generated articles on personal finance topics over several months under bylines like 'CNET Money Staff', without clear disclosure that they were AI-written. Futurism discovered the articles in January 2023 and found that many contained factual errors and plagiarized content from competitors such as Forbes, Investopedia, and Bankrate. The AI system appeared to function more like an automated paraphrasing tool, making minor word substitutions to existing articles rather than generating original content. CNET's plagiarism detection tools either failed or were not properly used by editors. After the public backlash, CNET issued corrections to multiple articles, added warnings to content under review, and temporarily paused the AI program. The company indicated it planned to resume using AI for content generation after improving its editorial processes. Red Ventures also used similar AI tools across other properties, including Bankrate and CreditCards.com, as part of an SEO-driven content strategy designed to rank highly in Google searches and generate affiliate marketing revenue.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed