CNET published dozens of AI-generated articles containing significant factual errors about financial topics, requiring multiple corrections after the mistakes were discovered by other publications.
Between November 2022 and January 2023, tech publication CNET quietly published 78 articles generated by an unnamed proprietary AI engine developed by its parent company Red Ventures. The articles, primarily basic financial explainers, were attributed to 'CNET Money Staff' with minimal disclosure that they were AI-generated. The practice was noticed by internet users and reported by Futurism in January 2023. On closer scrutiny, the AI-generated content was found to contain numerous factual errors, including incorrect calculations of compound interest, misstatements about loan payments, and false information about certificate of deposit terms. For example, one article incorrectly stated that a $10,000 deposit earning 3% interest would earn $10,300 after the first year, when the actual interest earned would be $300. CNET was forced to issue lengthy corrections to multiple articles and began reviewing all AI-assisted content for accuracy. The articles appeared designed to optimize search engine rankings and capture advertising revenue rather than to provide original reporting. CNET eventually paused the AI article program following widespread criticism over journalistic standards and transparency.
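The arithmetic behind that correction is straightforward to check. A minimal Python sketch (purely illustrative, not CNET's code) shows how the published figure conflated the year-end balance with the interest earned:

```python
# A $10,000 deposit at 3% annual interest, compounded yearly.
principal = 10_000
rate = 0.03

# Balance at the end of year one: principal plus one year of interest.
balance_after_one_year = principal * (1 + rate)   # 10300.0

# Interest earned is the growth, not the total balance.
interest_earned = balance_after_one_year - principal  # 300.0

# The article reported the $10,300 balance as the amount "earned";
# the correct interest figure is $300.
print(f"Balance: ${balance_after_one_year:,.2f}")
print(f"Interest earned: ${interest_earned:,.2f}")
```

The error, in other words, was off by the entire principal: reporting $10,300 of "earnings" overstates the true $300 of interest by a factor of more than thirty.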
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harm.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed