Facebook's automated content moderation system incorrectly removed a historical photograph of Aboriginal men in chains from the 1890s, flagging it for nudity when it was being used to refute the Australian Prime Minister's claim that there was no slavery in Australia.
In June 2020, Facebook's automated content moderation system incorrectly removed a historical photograph from the 1890s showing Aboriginal men in chains. The image was posted by an Australian user to refute Prime Minister Scott Morrison's claim that Australia had never had slavery, a claim Morrison retracted a day later. Facebook deleted the post and restricted the user's account for 24 hours, claiming the photo violated community standards due to nudity. The Guardian reported that dozens of other Facebook users experienced the same problem when attempting to share links to Guardian articles containing the image, with some receiving bans of up to 30 days. Facebook apologized after Guardian Australia inquired about the removal, stating it was an automated process error. A Facebook spokeswoman said the company had fewer workers available to review takedowns due to the coronavirus pandemic. The incident highlighted concerns that automated censorship is unevenly distributed, with some arguing that minority groups are more likely to have their content censored. Facebook allowed the article to be shared without restrictions on June 15, 2020.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
The accuracy and effectiveness of AI decisions and actions depend on group membership: design choices in the AI system and biased training data produce unequal outcomes, reduced benefits, increased effort, and alienation for affected users.
AI system: Due to a decision or action made by an AI system
Unintentional: Due to an unexpected outcome from pursuing a goal
Post-deployment: Occurring after the AI model has been trained and deployed