Law enforcement agencies across the United States used an AI tool called Cybercheck that claimed to geolocate suspects with over 90% accuracy, but investigations revealed the system produced unreliable evidence, leading to wrongful prosecutions and challenges to its methodology in murder cases.
Cybercheck is an AI-powered investigative tool developed by the Canadian company Global Intelligence, founded by Adam Mosher. It claims to use more than 700 algorithms to analyze open-source data and geolocate individuals, either in real time or at specific moments in the past. Since 2017 the system has been used by approximately 345 law enforcement agencies across 40 states in nearly 24,000 searches, with contracts ranging from $11,000 to $35,000. The tool builds 'cyber profiles' by amalgamating names, aliases, emails, phone numbers, IP addresses, and other online identifiers into what it presents as a person's unique digital fingerprint.

Investigations by WIRED and legal challenges, however, revealed significant problems with the system's accuracy and methodology. In multiple cases, including murder prosecutions in Ohio and Texas, defense attorneys found that Cybercheck produced contradictory reports, claimed implausibly precise accuracy rates, and supplied evidence that could not be verified through traditional means. In the Phillip Mendoza murder case, the system generated identical reports for different dates, both claiming 93.13% accuracy. Law enforcement agencies reported that Cybercheck information often could not be substantiated, with some describing results as 'completely false.'

The system does not retain supporting evidence for its findings and, despite claims to the contrary, has never been peer-reviewed. Several prosecutors have withdrawn Cybercheck evidence from trials, and multiple law enforcement agencies have discontinued the service because of its unreliable results.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, making them prone to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)