ShotSpotter's gunshot detection AI system was admitted as evidence in a criminal trial, but its analysts repeatedly revised their count of gunshots in the Silvon Simmons case; a judge ultimately overturned Simmons' conviction, citing the system's lack of demonstrated reliability.
ShotSpotter is an AI-powered gunshot detection system used by Rochester police since 2006 at a cost of $130,000 annually. The system places acoustic sensors throughout high-crime areas and detects and locates gunfire through algorithmic analysis of audio impulses.

In April 2016, Silvon Simmons was shot by police officer Joseph Ferrigno. ShotSpotter initially failed to detect the gunshots, classifying the sounds as a helicopter. After police requested a review, ShotSpotter analysts revised their assessment multiple times: first to three shots, then to four, and finally, after prosecution requests, to five. The audio became crucial evidence at Simmons' trial, where he was acquitted of attempted murder but convicted of weapons possession.

In January 2018, Judge Christopher Ciaccio overturned the conviction, ruling that ShotSpotter evidence did not meet the reliability standard for scientific evidence in criminal trials. The Innocence Project filed a brief challenging the system's reliability, highlighting the subjective nature of human analyst reviews and the potential for cognitive bias. Critics argue that the system was designed as an investigative tool for generating police alerts, not as primary evidence at trial, and that its claimed 80% accuracy within 25 meters lacks sufficient peer-reviewed testing.
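To illustrate how acoustic sensors can locate an impulse like a gunshot, the sketch below implements time-difference-of-arrival (TDOA) multilateration, the general technique behind acoustic gunshot location. This is a minimal illustration, not ShotSpotter's actual (proprietary) algorithm: sensor positions, the grid-search solver, and all names here are hypothetical.

```python
# Hypothetical TDOA localization sketch: given the times at which several
# sensors heard the same impulse, find the point whose predicted
# inter-sensor arrival-time differences best match the observed ones.
# NOT ShotSpotter's algorithm, which is proprietary.
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def locate(sensors, arrival_times, grid_step=1.0, extent=200.0):
    """Brute-force grid search over candidate source positions (metres)."""
    observed = [t - arrival_times[0] for t in arrival_times]
    best, best_err = None, float("inf")
    x = -extent
    while x <= extent:
        y = -extent
        while y <= extent:
            dists = [math.hypot(x - sx, y - sy) for sx, sy in sensors]
            predicted = [(d - dists[0]) / SPEED_OF_SOUND for d in dists]
            err = sum((o - p) ** 2 for o, p in zip(observed, predicted))
            if err < best_err:
                best, best_err = (x, y), err
            y += grid_step
        x += grid_step
    return best

# Simulate a shot at (30, -40) heard by four sensors at known positions.
sensors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
source = (30.0, -40.0)
times = [math.hypot(source[0] - sx, source[1] - sy) / SPEED_OF_SOUND
         for sx, sy in sensors]
est = locate(sensors, times)
```

In practice the hard part is the step this sketch assumes away: deciding whether a detected impulse is gunfire at all, and how many shots it contains, which is where ShotSpotter's algorithmic classification and human analyst review enter.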
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, leading to errors and failures with potentially significant consequences, especially in critical applications or domains that require moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed