Hoodline, a local news network, deployed AI to generate articles under fake human bylines with fabricated author photos and biographies, misleading readers about the origin of its content.
Hoodline, founded in 2014 as a San Francisco-based hyper-local news outlet, began using AI in recent years to generate articles under fake human bylines. The site created fictional author personas with names like Sarah Kim, Jake Rodriguez, and Mitch M. Rosenthal, initially accompanied by AI-generated headshots and fabricated biographical information. The outlet expanded into a national network covering major cities across the country, drawing millions of readers monthly. Screenshots from the Internet Archive showed that these fake author profiles included detailed biographies claiming local expertise. After criticism, Hoodline removed the fake photos and biographies but retained the human names, adding only small "AI" badges. The company's CEO, Zachary Chen, defended the practices, stating that the outlet employs dozens of editors and journalists, though at least two employees were reportedly based in the Philippines rather than in the US cities being covered. The News/Media Alliance suggested the content likely violates copyright law by repurposing existing news reporting. Experts criticized the approach as deliberately deceptive, undermining trust in local journalism and potentially accelerating the spread of misinformation.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed