Ars Technica published and then retracted an article containing AI-fabricated quotes attributed to a real person, leading to the termination of the reporter, who had used ChatGPT to paraphrase source material while working through an illness.
In February 2026, Ars Technica published an article about an AI agent that had allegedly published a hit piece against programmer Scott Shambaugh after he rejected its code contribution to matplotlib. The article included quotes attributed to Shambaugh that he never said or wrote. Reporter Benj Edwards later admitted that he had been sick with a fever and had used ChatGPT to help extract source material, inadvertently ending up with AI-paraphrased quotes rather than Shambaugh's actual words from his blog. Shambaugh discovered the fabricated quotes and updated his blog to clarify that he had never spoken to Ars Technica. Editor-in-chief Ken Fisher published an apology describing the episode as a 'serious failure of standards' that violated the publication's policy against publishing AI-generated content unless it is clearly labeled. The article was fully retracted, and Edwards was subsequently terminated from his position as senior AI reporter. The incident highlighted broader concerns about journalists being pressured to use AI tools while remaining accountable for AI-generated errors.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
Human
Due to a decision or action made by humans
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed