Tesla vehicles operating on Autopilot semi-autonomous driving technology crashed into stationary emergency vehicles and other objects, causing injuries and fatalities attributed to the system's unreliable detection of stationary objects.
Multiple Tesla vehicles equipped with Autopilot semi-autonomous driving technology crashed into stationary emergency vehicles and other objects between 2016 and 2022. The incidents include a Tesla striking a parked Laguna Beach Police Department vehicle in California, leaving the driver with minor injuries; similar crashes into fire trucks in Utah and Culver City; and fatal accidents, including a 2019 crash in Gardena in which a Tesla ran a red light and killed two people (Gilberto Alcazar Lopez and Maria Guadalupe Nieves). Reports describe at least 10 crashes over three years in which Teslas hit stationary police cars. Autopilot consistently struggled to detect stationary objects and flashing lights, steering vehicles directly into parked emergency vehicles. Tesla described the system as beta software and acknowledged the stationary-object detection problem, while maintaining that drivers must keep their hands on the wheel and remain alert. Dave Key, a Tesla owner whose car hit a police SUV in 2018, attributed the crash to a 'known software limitation' in detecting immobile objects. The crashes occurred despite Tesla's safety warnings and driver-monitoring requirements.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, leaving them prone to errors and failures that can have significant consequences, especially in critical applications or domains requiring moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed