Instagram's advertising algorithm systematically delivered ads featuring a 5-year-old girl modeling children's jewelry to adult men instead of the intended female audience, attracting responses from convicted sex offenders and exposing children to predatory behavior.
The New York Times investigated how Instagram's advertising algorithm misdirected ads for children's jewelry featuring a 5-year-old girl model. Although the ads targeted topics like parenting, children, and ballet, which Meta estimated as appealing mostly to women, they went almost entirely to adult men. Test ads run by The Times replicated this behavior: photos showing the child went to men 95% of the time on average, while photos of the items alone went to men 64% of the time. The ads drew direct responses from dozens of Instagram users, including phone calls from two accused sex offenders, offers to pay the child for sexual acts, and professions of love. The Times identified four convicted sex offenders who had messaged the accounts, liked photos, or left comments. Instagram's algorithm appears to be connecting men with a sexual interest in children to child-focused content. The investigation also revealed a broader ecosystem in which thousands of parent-run Instagram accounts for young influencers attract sexualized comments from adult men; one mother accepted that 92% of her daughter's followers were men as an inevitable cost of influencer success. Meta's internal investigation found that nearly all subscribers to child accounts offering paid subscriptions demonstrated malicious behavior toward children.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and inappropriate relationships with or expectations of AI systems. Trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or can result in harm from inappropriate use of AI in critical situations (e.g., a medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed