In December 2016, Uber's self-driving cars in San Francisco ran multiple red lights. The company initially blamed human error, but internal documents later revealed that the AI mapping system had failed to recognize the traffic signals.
In December 2016, Uber launched a self-driving car pilot program in San Francisco using Volvo XC90s and Ford Fusions without obtaining the required permits from the California DMV. On December 14, 2016, Uber's autonomous vehicles ran red lights in multiple incidents. One, captured on video by a taxi driver, showed a car proceeding through an intersection approximately three seconds after the light turned red, as a pedestrian was entering the crosswalk.

Uber initially blamed "human error" and suspended the safety drivers, claiming the violations occurred while humans were controlling the vehicles. However, internal company documents and employee sources later revealed to The New York Times that the autonomous driving system was actually in control during these incidents. The mapping programs used by Uber's cars failed to recognize at least six traffic lights in the San Francisco area, causing the vehicles to proceed through red signals. The cars also failed to properly navigate bike lanes and were observed running stop signs and failing to yield to pedestrians.

On December 23, 2016, California DMV officials revoked the registrations of 16 Uber test vehicles after determining they were improperly registered. Rather than comply with California regulations, Uber moved its testing program to Arizona. The incidents exposed both technical failures in the AI system and corporate dishonesty about their causes.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, leading to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed