An AI-powered app called Speedcam Anywhere that estimates vehicle speeds from smartphone footage faced rejection from app stores and prompted abusive responses from drivers, forcing its developers into anonymity.
In March 2024, a team of AI scientists with backgrounds at Silicon Valley companies and top UK universities launched Speedcam Anywhere, an app that uses AI to estimate the speed of passing cars from smartphone footage. The app works by filming a passing car, reading its number plate, and querying the DVLA's public registration database for the car's make and model. From the make and model it looks up the distance between the axles, then compares that known length against the car's motion in the footage to calculate its speed. Users can save the video or generate reports to share with authorities.

However, Google refused to allow the app on the Play Store, claiming it was not possible to estimate vehicle speed using AI alone, even though the developers successfully demonstrated the technology. Apple has also delayed approval of the iOS version without giving a reason. The app cannot lead to speeding tickets: its algorithm has not been vetted by the Home Office, so its output is legally insufficient evidence for a speeding prosecution, though it could support dangerous-driving charges.

The developers have received abusive emails and death threats, forcing them to remain anonymous; one review compared the app to East German surveillance by the Stasi. The founders hope the app will alert police to speeding hotspots and encourage action against dangerous driving.
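The underlying principle is simple geometry: the car's wheelbase is a known real-world length, so timing how long the car takes to travel that length in the video yields its speed. The sketch below illustrates this principle only; the function name, inputs, and figures are illustrative assumptions, not the app's actual algorithm or code.

```python
def estimate_speed_kmh(wheelbase_m: float, frames_elapsed: float, fps: float) -> float:
    """Illustrative speed estimate from video, assuming we can measure how many
    frames the car takes to advance by its own (known) wheelbase.

    wheelbase_m    -- axle-to-axle distance for the identified make/model (metres)
    frames_elapsed -- frames the car takes to travel one wheelbase in the footage
    fps            -- frame rate of the smartphone video
    """
    elapsed_s = frames_elapsed / fps          # time to cover one wheelbase
    speed_ms = wheelbase_m / elapsed_s        # speed = distance / time
    return speed_ms * 3.6                     # m/s -> km/h

# Hypothetical example: a car with a 2.7 m wheelbase advancing one wheelbase
# in 8 frames of 30 fps video is travelling about 36 km/h.
print(round(estimate_speed_kmh(2.7, 8, 30), 2))
```

The hard parts in practice (plate recognition, model identification, and measuring the car's pixel displacement robustly) are where the AI comes in; the final speed calculation itself is this elementary ratio.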
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and inappropriate relationships with or expectations of AI systems. Trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or result in harm from inappropriate use of AI in critical situations (e.g., medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.
Human
Due to a decision or action made by humans
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed