YouTube's recommendation algorithms and content ecosystem contributed to the radicalization of the Christchurch terrorist who killed 51 people in March 2019 by exposing him to extremist content and providing a pathway to increasingly radical material.
According to a New Zealand government report released in 2020, YouTube and other social media platforms were instrumental in radicalizing the terrorist who killed 51 worshippers in a March 2019 attack on two New Zealand mosques. The terrorist regularly watched extremist content on YouTube, which he described as "a far more significant source of information and inspiration" than the extreme right-wing sites where he posted comments. He discovered 4chan at age 14 and spent considerable time in extremist corners of YouTube, following far-right creators such as Stefan Molyneux. The report describes how YouTube's recommendation algorithms and business model created a "step-ladder of amplification" that funneled viewers into increasingly extreme content streams, while viewership-based revenue incentivized creators to post ever more inflammatory material to attract larger audiences. Experts described the terrorist's radicalization pathway as "entirely unexceptional": exposure to domestic violence, unsupervised computer access, and limited personal engagement left him open to the influence of extreme right-wing material found online. YouTube has since removed channels, including Stefan Molyneux's in June 2020, and reports a fivefold spike in hate videos removed, but experts argue that the fundamental business-model issues remain unaddressed.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI that exposes users to harmful, abusive, unsafe, or inappropriate content, which may include providing advice on or encouraging harmful action. Examples of toxic content include hate speech, violence, extremism, illegal acts, and child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed