Facebook's AI-powered content moderation system incorrectly flagged legitimate eyewitness images and videos of the Lekki toll gate massacre in Lagos, Nigeria, labeling real documentation of military violence against protesters as 'False Information'.
On October 21, 2020, Facebook and Instagram flagged several posts containing images related to the Lekki toll gate incident in Lagos, Nigeria as misinformation. The incident occurred on October 20, 2020, when military personnel shot into a crowd of #EndSARS protesters who remained at the protest site after a 9 PM curfew was imposed. Eyewitness reports described men in military uniforms approaching and shooting protesters who were waving Nigerian flags and singing the national anthem. Live footage shared on Instagram by DJ Switch showed gunshot victims and bodies wrapped in bloodied flags.

Facebook uses a hybrid system of human moderators and AI to check misinformation, partnering with certified third-party fact-checking organizations including Africa Check Nigeria, AFP Nigeria, and Dubawa. The flagged content included images of LCC staff allegedly removing CCTV cameras, protesters holding Nigerian flags, bloodied flags, survivors at hospitals, and corpses from the scene. These images were labeled with cautions reading 'False Information. The same information was checked in another post by independent fact-checkers.' The content's visibility was reduced across timelines, though the images were not completely removed.

Ironically, AFP, one of Facebook's fact-checking partners, acknowledged Amnesty International Nigeria's report about the killings in Lagos, creating confusion about how Facebook reached its misinformation conclusion.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.