Facebook's automated advertising systems routinely misidentified and blocked adaptive fashion products designed for people with disabilities, treating them as medical devices and preventing small businesses from advertising on the platform.
Facebook's automated advertising system systematically rejected ads and product listings from adaptive clothing companies that serve people with disabilities. The incident affected at least seven small adaptive fashion companies over a period of at least two years; some experienced weekly rejections, and one company had hundreds of products blocked. The AI system flagged products such as hoodies bearing 'immunocompromised' messages, adaptive underwear, and clothing featuring models with disabilities as violations of policies against promoting 'medical and health care products and services including medical devices.' Companies had to appeal decisions item by item through largely automated processes, with appeals taking up to 10 days. The pattern was first publicly identified in 2018, when Slick Chicks experienced shadow banning. Similar issues occurred on other platforms, including TikTok and Amazon. The automated systems appeared to focus on wheelchairs rather than the clothing itself when reviewing seated-fit garments, and flagged disability-related terminology as medical content. Some companies gave up advertising on Facebook entirely due to resource constraints, while others hired expensive media agencies to gain access to human reviewers.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Risk Subdomain: Unfair discrimination and misrepresentation
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.

Entity: AI system
Due to a decision or action made by an AI system

Intent: Unintentional
Due to an unexpected outcome from pursuing a goal

Timing: Post-deployment
Occurring after the AI model has been trained and deployed