In 1983, Soviet Lt. Colonel Stanislav Petrov prevented a potential nuclear war by correctly judging an Oko early-warning alert, which showed five incoming US missiles, to be a false alarm, reporting it as a computer malfunction rather than an attack.
On September 26, 1983, Soviet Lt. Colonel Stanislav Petrov was the duty officer at the Serpukhov-15 early-warning facility near Moscow, monitoring the Oko satellite system designed to detect nuclear missiles launched from the United States. Shortly after midnight Moscow time, the system reported what appeared to be five US Minuteman intercontinental ballistic missiles heading toward the Soviet Union, with the computer indicating 'highest reliability' and '100% probability of attack.' Under Soviet protocol, Petrov was required to report the alert immediately to his superiors, who would have consulted Soviet leadership about launching a nuclear counterattack. Petrov, however, suspected a false alarm: he distrusted the newly deployed system, the small number of missiles was inconsistent with the massive first strike Soviet doctrine anticipated, and ground radar provided no corroborating evidence. He reported the incident as a computer malfunction rather than an attack.

A subsequent investigation confirmed that the Oko system had misidentified sunlight reflecting off high-altitude clouds over North Dakota as missile launches. The incident occurred during a period of heightened Cold War tension, just over three weeks after the Soviet Union shot down Korean Air Lines Flight 007, killing all 269 people aboard, including a US congressman. Petrov's decision potentially averted a nuclear retaliation that could have escalated into full-scale nuclear war between the superpowers.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-generated and may contain errors.
Risk domain: AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.

Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)