Eric Horvitz's Tesla Autopilot failed to center properly on a curving road, causing both driver's-side tires to strike a raised yellow curb, shredding the tires and damaging the rear suspension.
In the summer of 2017, Eric Horvitz, Microsoft's director of artificial intelligence research, was using Tesla's Autopilot function while driving on a curving road near Microsoft's campus in Redmond, Washington. During the drive, he was taking a call about AI ethics and governance. The car's Autopilot system failed to center the vehicle properly on the road, and both tires on the driver's side struck a raised yellow curb marking the center line. The impact shredded both tires and damaged the vehicle's rear suspension. Horvitz grabbed the wheel to regain control and pull the car back into the lane. He was physically unharmed, but the vehicle had to be towed away. When Horvitz called Tesla to report the incident, he found the company more focused on establishing limits to its liability than on collecting safety data: because he had been driving slower than 45 mph, Tesla's usage guidelines placed responsibility for the incident on him.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Risk domain: AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
Entity: AI system (due to a decision or action made by an AI system).
Intent: Unintentional (due to an unexpected outcome from pursuing a goal).
Timing: Post-deployment (occurring after the AI model has been trained and deployed).