Tesla recalled 362,758 vehicles equipped with Full Self-Driving Beta software after NHTSA found the system created crash risks by violating traffic laws at intersections, failing to stop completely at stop signs, and not properly responding to speed limits.
In February 2023, Tesla issued a recall for 362,758 vehicles equipped with its Full Self-Driving (FSD) Beta software after the National Highway Traffic Safety Administration (NHTSA) identified safety violations. The recall covered 2016-2023 Model S and Model X, 2017-2023 Model 3, and 2020-2023 Model Y vehicles with FSD Beta installed or pending installation.

NHTSA found that the FSD system could act unsafely around intersections: traveling straight through turn-only lanes, entering stop sign-controlled intersections without coming to a complete stop, proceeding through yellow traffic signals without proper caution, and responding inadequately to changes in posted speed limits. The agency determined these behaviors created 'an unreasonable risk to motor vehicle safety based on insufficient adherence to traffic safety laws.' Tesla identified 18 warranty claims between May 2019 and September 2022 potentially related to these conditions but reported no awareness of injuries or deaths.

The company disagreed with NHTSA's analysis but agreed to a voluntary recall 'out of an abundance of caution,' planning to address the issues through a free over-the-air software update deployed within weeks. The FSD feature, priced at $15,000 upfront or $199 per month, had been deployed to approximately 400,000 customers in North America. The recall was part of ongoing NHTSA scrutiny of Tesla's driver assistance systems, including separate probes into Autopilot-related crashes with emergency vehicles.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, exposing users to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.
AI system: Due to a decision or action made by an AI system
Unintentional: Due to an unexpected outcome from pursuing a goal
Post-deployment: Occurring after the AI model has been trained and deployed