Tesla's Autopilot driver-assist system has been linked to at least 14 fatalities, several dozen injuries, and 467 crashes, prompting NHTSA to investigate the adequacy of Tesla's December 2023 recall of over 2 million vehicles.
The National Highway Traffic Safety Administration (NHTSA) opened an investigation into Tesla's Autopilot driver-assist system after identifying 467 crashes involving the technology, including at least 14 fatalities and several dozen injuries. The investigation followed Tesla's December 2023 recall of over 2 million vehicles equipped with Autopilot, covering Model S, X, 3, and Y vehicles, as well as the Cybertruck, made since 2012. NHTSA found evidence that Tesla's weak driver-engagement system was not appropriate for Autopilot's permissive operating capabilities, creating a critical safety gap between what drivers expected the system to do and what it could actually do safely. The agency determined that many of the crashes were avoidable and involved hazards that would have been visible to an attentive driver. Tesla's recall remedy was deployed through a software update that added safeguards against driver misuse, including more prominent visual alerts and restrictions on Autopilot use when improper usage is detected. However, NHTSA opened a new investigation into the adequacy of this recall after identifying crashes that occurred even after the software update was installed. The agency criticized Tesla's approach as an industry outlier, noting that the name "Autopilot" suggests drivers need not remain in control, whereas other manufacturers use terms like "assist" or "team" to make clear that active supervision is required.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, exposing users to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed