Researchers conducted algorithmic audits of Amazon's search and recommendation systems and found that 10.47% of vaccine-related search results promoted misinformative health products. The audits also revealed filter-bubble effects: users who clicked on misinformative products received more misinformation in subsequent recommendations.
Researchers from the University of Washington conducted systematic algorithmic audits of Amazon's search and recommendation algorithms to investigate vaccine misinformation on the platform. The study involved two sets of audits: unpersonalized audits, which examined search results for 48 vaccine-related queries spanning 10 topics without logging in, and personalized audits, which analyzed how account history affects recommendations. The unpersonalized audits ran for 15 consecutive days using 5 different Amazon sort filters, yielding 36,000 search results and 16,815 product-page recommendations.

The researchers found that 10.47% of search results promoted misinformative health products and that Amazon ranked misinformative results higher than debunking results. The personalized audits revealed filter-bubble effects: accounts that performed actions on misinformative products received more misinformation in their homepage, product-page, and pre-purchase recommendations. Misinformative products appeared across multiple categories, including books, Kindle eBooks, Amazon Fashion items, and Health & Personal Care products.

The research produced a dataset of 4,997 Amazon products annotated for health misinformation, demonstrating that Amazon's algorithms amplify vaccine misinformation through both search rankings and personalized recommendations.
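To make the unpersonalized audit design concrete, the following is a minimal sketch of the daily collection loop it describes: every query is fetched under every sort filter and the top-ranked results are logged. This is an illustrative reconstruction, not the authors' released code; the Amazon URL parameters, sort-parameter values, CSS selector, query list, and function names are all assumptions, and a real run would need robust parsing, rate limiting, and attention to the platform's terms of service.

```python
# Sketch of the unpersonalized audit loop (hypothetical reconstruction).
import csv
import time
from datetime import date

import requests
from bs4 import BeautifulSoup

QUERIES = ["vaccination", "vaccine controversy"]  # study used 48 queries across 10 topics
SORT_FILTERS = {  # the study's 5 Amazon sort filters; the "s" parameter values are assumed
    "featured": "relevanceblender",
    "price_low_to_high": "price-asc-rank",
    "price_high_to_low": "price-desc-rank",
    "avg_customer_review": "review-rank",
    "newest_arrivals": "date-desc-rank",
}
HEADERS = {"User-Agent": "Mozilla/5.0 (research audit)"}


def fetch_search_results(query: str, sort_param: str, top_k: int = 10) -> list[str]:
    """Return the ASINs of the first `top_k` results for one query/filter pair."""
    resp = requests.get(
        "https://www.amazon.com/s",
        params={"k": query, "s": sort_param},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Assumed selector: organic results carry a non-empty data-asin attribute.
    asins = [div["data-asin"] for div in soup.select("div[data-asin]") if div["data-asin"]]
    return asins[:top_k]


def run_daily_audit(out_path: str) -> None:
    """One day's snapshot: every query under every sort filter, appended to a CSV."""
    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        for query in QUERIES:
            for filter_name, sort_param in SORT_FILTERS.items():
                for rank, asin in enumerate(fetch_search_results(query, sort_param), start=1):
                    writer.writerow([date.today().isoformat(), query, filter_name, rank, asin])
                time.sleep(5)  # pause between requests


if __name__ == "__main__":
    run_daily_audit("amazon_audit_results.csv")  # repeated daily over the 15-day window
```

As a consistency check, collecting the top 10 results for 48 queries under 5 filters over 15 days gives 48 × 5 × 15 × 10 = 36,000 search results, matching the figure reported above.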
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)
No population impact data reported.