A facial recognition system in Ningbo, China, designed to identify and publicly shame jaywalkers mistakenly captured and displayed the face of prominent businesswoman Dong Mingzhu from an advertisement on the side of a passing bus.
In November 2018, an AI-powered facial recognition system deployed by traffic police in Ningbo, China, to catch and publicly shame jaywalkers made a significant error. The system, called 'DeepEye' and developed by the Shenzhen-based company Intellifusion, captured the face of Dong Mingzhu, chairwoman of China's largest air-conditioner manufacturer, Gree Electric Appliances, from an advertisement on the side of a bus passing through an intersection. It then displayed her image on a large public screen used to shame traffic violators, falsely identifying her as a jaywalker and even getting her surname wrong, showing 'Ju' instead of 'Dong'.

The mistake was quickly spotted by citizens and went viral on the Chinese social media platform Weibo. Ningbo traffic police acknowledged the error in a public statement, deleted the violation record, and claimed to have upgraded the system to prevent similar false recognitions in the future. The incident highlighted a technical limitation of the facial recognition technology: the lack of 'live detection' capabilities that would distinguish real people from photographs. Gree Electric responded graciously, thanking the police for their hard work and calling on people to obey traffic rules.

The incident occurred in the context of China's widespread deployment of facial recognition systems across many cities; Shenzhen alone reported nearly 14,000 jaywalkers identified at a single intersection over a 10-month period.
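The safeguard the police described as missing, often called liveness or anti-spoofing detection, can be approximated with simple temporal cues: a printed face on a bus stays rigid from frame to frame once a tracker compensates for the vehicle's motion, while a live pedestrian's face changes non-rigidly (blinks, expression, head pose). The sketch below is a minimal, hypothetical illustration of such a gate in Python; the function names, thresholds, and synthetic data are illustrative assumptions, not the actual DeepEye implementation.

```python
import numpy as np


def temporal_liveness_score(face_crops):
    """Crude liveness cue: mean frame-to-frame change across aligned
    face crops. A static printed face yields ~0; a live face yields a
    positive score from blinks and expression changes."""
    if len(face_crops) < 2:
        return 0.0  # too few frames to judge; treat as not-live
    stack = np.stack([c.astype(np.float32) / 255.0 for c in face_crops])
    return float(np.abs(np.diff(stack, axis=0)).mean())


def should_flag_violation(face_crops, match_confidence,
                          match_threshold=0.9, liveness_threshold=0.01):
    """Flag a jaywalking violation only if the identity match is
    confident AND the face appears to belong to a live person."""
    if match_confidence < match_threshold:
        return False
    return temporal_liveness_score(face_crops) >= liveness_threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

    # Printed ad on a bus: identical crops once the tracker has
    # compensated for the bus's rigid motion.
    photo_frames = [base.copy() for _ in range(5)]

    # Live pedestrian: the same face plus a localized change in the
    # eye region on alternating frames (a crude stand-in for blinking).
    live_frames = []
    for i in range(5):
        frame = base.copy()
        if i % 2 == 0:
            frame[20:28, 16:48] //= 2
        live_frames.append(frame)

    for label, frames in [("photo", photo_frames), ("live", live_frames)]:
        print(label, "flagged:",
              should_flag_violation(frames, match_confidence=0.95))
```

A production system would use a trained anti-spoofing model rather than a pixel-difference heuristic, but the gating structure is the point: require independent evidence of liveness before publishing an identification, so a confident identity match against a photograph is not enough to trigger public shaming.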
Domain classifications, causal taxonomy, severity scores, and national security assessments were generated by an LLM classifier and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, leading to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed