AI-powered weapons scanners deployed in NYC subway stations during a 30-day pilot program failed to detect any firearms but generated 118 false positives out of 2,749 scans, while the manufacturer, Evolv, faced an FTC settlement over misleading marketing claims about the technology's capabilities.
The New York Police Department conducted a 30-day pilot program testing AI-powered weapons scanners manufactured by Evolv at 20 subway stations, performing 2,749 scans in total. The scanners detected no passengers carrying firearms but generated 118 false positives, a 4.29% false alarm rate. They did detect 12 knives, though police declined to specify whether these were illegal weapons or permitted tools such as pocket knives. Mayor Eric Adams had announced the pilot as part of efforts to deter subway violence, despite Evolv CEO Peter George having previously stated that subways were not a good use case for the technology due to interference from railways.

Concurrently, the Federal Trade Commission entered into a settlement with Evolv resolving claims that the company made misleading marketing representations about its technology's capabilities. The FTC alleged that Evolv's Express scanners were fundamentally metal detectors rather than sophisticated weapons detection systems, and that the company overstated their accuracy and effectiveness. Evolv also faced a class-action lawsuit from investors alleging that it overstated the devices' capabilities.

The Legal Aid Society characterized the pilot's results as 'objectively a failure' and called for the program to be permanently shelved.
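As a minimal sketch of where the 4.29% figure comes from, the calculation below derives the false alarm rate from the pilot's reported counts. Only the counts (2,749 scans, 118 false positives, 0 firearms, 12 knives) come from the source; the variable names are illustrative.

```python
# Derivation of the reported false alarm rate from the pilot's counts.
total_scans = 2749        # scans performed over the 30-day pilot
false_positives = 118     # alerts where no firearm was present
firearms_detected = 0     # no firearms were found during the pilot
knives_detected = 12      # knives flagged (legality unspecified by police)

false_alarm_rate = false_positives / total_scans
print(f"False alarm rate: {false_alarm_rate:.2%}")  # -> 4.29%
```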
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, leaving them prone to errors and failures with potentially significant consequences, especially in critical applications or areas requiring moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed