ShotSpotter gunshot detection technology deployed across multiple US cities exhibited significant accuracy problems, including false positive rates far higher than advertised and failures to detect actual shootings, leading to wasted police resources and potential civil rights violations.
ShotSpotter, a gunshot detection system developed by SST (now SoundThinking), uses acoustic sensors and AI algorithms to identify and locate gunshots for police departments. The system was deployed in over 130 US cities, including Chicago, San Francisco, San Antonio, Troy, and Fall River.

Multiple investigations and audits revealed significant performance issues. Chicago's Inspector General found that 89% of ShotSpotter alerts produced no evidence of a gun-related crime, while San Diego police made only two arrests from 584 alerts over four years. The company's advertised 97% accuracy rate was contradicted by real-world performance data: in Chicago, analyst Scott DeDore tracked the system from 2017 to 2018 and found it correctly detected gunshots in only 63 of 135 shooting incidents (47% accuracy).

Several cities, including Troy, San Antonio, and Fall River, discontinued the service due to poor performance and high costs; the technology costs approximately $65,000-90,000 per square mile annually. Critics raised concerns that the system was deployed primarily in communities of color, generating additional police deployments that could lead to unnecessary confrontations. The system also raised privacy concerns, since its microphones record all sounds, not just gunshots. Human analysts at ShotSpotter can modify the system's determinations, with such changes occurring 10% of the time according to company documents.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, leading to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed