TikTok's recommendation algorithm promoted misogynistic content from Andrew Tate to young users despite the platform's policies banning such content. Videos of Tate have been viewed 11.6 billion times, amplified by a coordinated manipulation campaign by his followers.
An Observer investigation revealed that TikTok's algorithm was actively promoting misogynistic content from Andrew Tate to young users, including children as young as 13. The investigators created a fake account posing as an 18-year-old male; after it watched just two Tate videos, the app began flooding it with his content, and when checked a week later, 8 of 20 videos in its feed featured Tate. Videos of Tate have been watched 11.6 billion times on the platform. The content spread through a coordinated campaign by members of Tate's Hustler's University, who were instructed to flood social media with his most controversial clips to maximize engagement and recruit new members paying 39 in monthly fees. Although TikTok's community guidelines explicitly ban misogyny and impersonation accounts, hundreds of copycat accounts using Tate's name and image continued to operate. The promoted content included videos in which Tate described women as property, blamed feminism for men's problems, called people who seek therapy 'useless', and described controlling behavior toward girlfriends. Experts warned that this aggressive algorithmic promotion could radicalize young male users, and domestic abuse campaigners described the content as extreme misogyny capable of causing offline harm.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI that exposes users to harmful, abusive, unsafe, or inappropriate content, which may involve providing advice or encouraging action. Examples of toxic content include hate speech, violence, extremism, illegal acts, or child sexual abuse material, as well as content that violates community norms such as profanity, inflammatory political speech, or pornography.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed