Amazon's recommendation algorithm began suggesting suicide-related products to customers who purchased sodium nitrite, after multiple people died by suicide using the chemical compound bought through the platform.
Amazon's recommendation algorithm identified patterns in customer purchasing behavior related to sodium nitrite, a food preservative that was also being used for suicide. After multiple people purchased the compound through Amazon and used it to kill themselves, the platform's algorithm began suggesting products that customers frequently bought together with it, including scales for measuring doses, anti-nausea medication, and suicide instruction materials. The New York Times identified 10 people, including teenagers and young adults, who killed themselves over a two-year period using sodium nitrite purchased through Amazon. Despite explicit complaints from family members and others alerting Amazon to the deaths and requesting the product's removal, the company declined to act. The algorithm's suggestions effectively assembled "suicide kits" by recommending the chemical alongside the other items needed to use it lethally. Congressional lawmakers have demanded answers from Amazon about these sales and algorithmic recommendations. The incident highlights how AI recommendation systems can inadvertently facilitate harmful behavior by identifying and promoting dangerous product combinations based on purchasing patterns.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and to inappropriate relationships with, or expectations of, AI systems. This trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or can cause harm when AI is used inappropriately in critical situations (e.g., a medical emergency). Overreliance on AI systems can also compromise autonomy and weaken social ties.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed