Meta Platforms continued running ads on Facebook and Instagram that directed users to illegal drug marketplaces, with the company's AI content moderation systems failing to stop hundreds of such ads despite a federal investigation.
Meta Platforms has continued running ads on Facebook and Instagram that steer users to online marketplaces for illegal drugs, months after The Wall Street Journal first reported a federal investigation into the practice. The company has kept collecting revenue from ads that violate its policies banning the promotion of illicit drug sales. A July review by the Journal found dozens of ads marketing substances such as cocaine and prescription opioids, while the Tech Transparency Project identified more than 450 such ads between March and June. The ads showed photos of prescription bottles, pills, and drugs, carried text like 'Place your orders', and included images of cocaine arranged to spell 'DMT'.

Meta uses artificial intelligence tools to moderate content, but these systems failed to stop the drug ads, which often redirect users to Telegram or WhatsApp group chats where purchases are arranged. Layoffs have also reduced the company's content moderation teams.

One tragic case involved 15-year-old Elijah Ott of California, who died of a fentanyl overdose after buying drugs through contacts he made on Instagram. His mother found messages showing how he had connected with drug dealers on Instagram while seeking marijuana oil and Xanax-like pills. The autopsy identified fentanyl as the cause of death, and his mother believes the drugs he purchased were laced with it.

After being contacted by journalists, Meta disabled many of the ads within 48 hours of their being identified and banned the users who had violated its policies.
Domain classification, causal taxonomy, severity scores, and national security assessments were generated by an LLM classifier and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, making them prone to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed