Multiple Tesla vehicles equipped with Autopilot driver-assistance technology were involved in fatal and injury crashes, including collisions with stationary vehicles, barriers, and pedestrians, raising questions about the safety and limitations of semi-autonomous driving systems.
This report documents multiple incidents involving Tesla vehicles equipped with Autopilot, a Level 2 semi-autonomous driving system that handles steering and speed control but requires continuous driver attention.

Key incidents include: a fatal March 2018 crash in Mountain View, California, where a Model X struck a concrete barrier at 71 mph, killing driver Walter Huang; a fatal May 2016 crash in Florida, where a Model S collided with a tractor-trailer, killing Joshua Brown; multiple crashes into stationary emergency vehicles, including fire trucks and police cars; and an Uber self-driving vehicle fatality in Arizona in March 2018. The Tesla crashes occurred while Autopilot was engaged, and investigations revealed that drivers often had their hands off the steering wheel and ignored multiple warnings.

Tesla has faced criticism for its response to crashes, often blaming driver inattention while defending Autopilot's safety record. The company claims Autopilot reduces crash rates by 40% and results in one fatality per 320 million miles, compared with one per 86 million miles for conventional vehicles. However, safety experts question these statistics and argue that the comparison methodology is flawed. The National Transportation Safety Board has conducted multiple investigations and criticized Tesla for releasing information during ongoing probes. Several incidents involved drivers who were distracted, watching movies, or had complained about Autopilot's behavior at specific locations before the fatal crashes occurred.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, leading to errors and failures that can have significant consequences, especially in critical applications or domains requiring moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed