Sound Intelligence's AI-powered aggression detection system, deployed in hundreds of schools worldwide, frequently produces false positives, misidentifying normal student activities such as laughing, coughing, and cheering as aggression, while often failing to detect actual screams.
Sound Intelligence, a Dutch company, developed an AI-powered aggression detection system that is deployed in hundreds of schools, healthcare facilities, banks, stores, and prisons worldwide, including more than 100 sites in the U.S. California-based Louroe Electronics has loaded the software onto its microphones since 2015, selling the devices for about $1,000 each to customers including Pinecrest Academy Horizon in Nevada and Rock Hill Schools in South Carolina.

ProPublica's testing revealed significant performance problems. In trials with high school students at Frank Sinatra School of the Arts in Queens and Staples Pathways Academy in Connecticut, the system frequently triggered false alarms on normal activities: cheering when pizzas were delivered, students shouting answers during Pictionary games, laughter, and coughing fits. Of 55 instances in which students screamed on cue, only 22 (40 percent) triggered the detector. The system appears to correlate aggression with rough, strained noises at relatively high pitch, which led it to flag a 1994 YouTube clip of comedian Gilbert Gottfried as aggressive.

The Valley Hospital in New Jersey phased out the detector after a three-year, $22,000 pilot program because of poor performance: after sensitivity was reduced to cut false alarms from patients' voices and cafeteria noise, the device failed to detect an agitated man screaming and pounding on a desk.
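The reporting suggests the detector keys on surface acoustic correlates of aggression (rough, strained, relatively high-pitched sound) rather than on aggressive intent. As a purely hypothetical sketch of why such a cue is fragile, the Python snippet below flags any clip whose estimated pitch is high and whose timbre is noisy; the function name, the choice of pitch plus spectral flatness as features, and the thresholds are all invented for illustration and are not Sound Intelligence's actual method.

```python
# Hypothetical illustration only: a naive heuristic in the spirit of what
# ProPublica's testing suggests (flagging rough, strained, high-pitched
# sound), NOT Sound Intelligence's actual algorithm. The feature choices
# and thresholds below are invented for the sake of the example.
import numpy as np
import librosa  # assumed available for pitch and spectral features

def naive_aggression_flag(path: str) -> bool:
    """Flag a clip as 'aggressive' if it is high-pitched and rough-timbred."""
    y, sr = librosa.load(path, sr=None, mono=True)

    # Estimate the fundamental frequency over time; screams and strained
    # voices tend to sit well above ordinary speech (~100-250 Hz).
    f0 = librosa.yin(y, fmin=80, fmax=1000, sr=sr)
    median_pitch = float(np.median(f0))

    # Spectral flatness as a crude "roughness" proxy: harsh, strained
    # sounds have a noisier (flatter) spectrum than clear voiced speech.
    roughness = float(np.mean(librosa.feature.spectral_flatness(y=y)))

    # Invented decision rule: high pitch AND rough timbre => "aggression".
    # A rule of this shape also fires on cheering, laughter, coughing fits,
    # or a raspy comedian's voice, and misses lower-pitched screams:
    # exactly the error pattern ProPublica reported.
    return median_pitch > 350.0 and roughness > 0.2

if __name__ == "__main__":
    print(naive_aggression_flag("clip.wav"))  # path is a placeholder
```

The point of the sketch is only that any classifier built on acoustic correlates of this shape inherits exactly the confusions observed in testing, regardless of how the thresholds are tuned.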
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, leaving them prone to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)