A Tesla Model S driver was found asleep and intoxicated behind the wheel while the vehicle continued at 70 mph on Highway 101. Police needed a 7-minute operation to stop the car, exploiting its Autopilot system's traffic-following behavior.
On November 30, 2018, at approximately 3:30 AM, California Highway Patrol officers spotted a gray Tesla Model S traveling southbound at 70 mph on Highway 101 near Redwood City, with the driver appearing to be asleep at the wheel. The driver was identified as 45-year-old Alexander Samek, a Los Altos planning commissioner. Officers attempted to pull the vehicle over using lights and sirens, but the driver remained unresponsive.

Believing the Tesla's Autopilot semi-autonomous driving feature was engaged, officers executed a coordinated maneuver: one patrol car created a traffic break behind the Tesla while another positioned itself in front and gradually slowed down. The Tesla's sensors detected the slower vehicle ahead and automatically reduced speed to match. The operation took approximately 7 minutes and covered 7 miles before the vehicle came to a complete stop. Officers then woke Samek by knocking on the windows and giving verbal commands. He was arrested for driving under the influence after failing a field sobriety test.

Tesla declined to comment on whether Autopilot was actually engaged, but the incident raised questions about how the system's safety features could be circumvented, as Autopilot is designed to alert drivers and eventually stop the vehicle if hands are not detected on the steering wheel.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and inappropriate relationships with or expectations of AI systems. Trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or result in harm from inappropriate use of AI in critical situations (e.g., medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.
Human — Due to a decision or action made by humans
Unintentional — Due to an unexpected outcome from pursuing a goal
Post-deployment — Occurring after the AI model has been trained and deployed