On March 22, 2003, a U.S. Patriot missile system's AI-powered target classification system misidentified a returning Royal Air Force Tornado GR4 fighter jet as an Iraqi anti-radiation missile and automatically engaged it, killing both crew members instantly.
On March 22, 2003, during the U.S.-led invasion of Iraq, a Patriot missile system operated by American troops fired an interceptor at a UK Royal Air Force Tornado GR4 fighter jet, killing Flight Lieutenant Kevin Main and Flight Lieutenant David Williams instantly. The Patriot system, manufactured by Raytheon, uses phased-array radar and computer-controlled engagement stations that combine automated operations with "man-in-the-loop" override capabilities to detect and engage targets. The system's AI-powered classification algorithms mistakenly identified the returning Tornado as an Iraqi anti-radiation missile based on its flight profile during descent. The RAF Board of Inquiry concluded that multiple factors contributed: the Patriot's broad target classification criteria, which were programmed for worldwide threats rather than the specific threats present in Iraq; its autonomous operation mode; the failure of the aircraft's identification-friend-or-foe (IFF) system; and a lack of communications equipment that denied the Patriot crew access to wider airspace situational awareness. The incident occurred in an environment where friendly aircraft outnumbered enemy aircraft by roughly 4,000 to 1, making target discrimination extremely challenging. This was one of multiple friendly-fire incidents involving Patriot systems during the 2003 Iraq invasion, including the downing of a U.S. Navy F/A-18 that killed pilot Lieutenant Nathan White.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, leaving them prone to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed