The Pasco County Sheriff's Office deployed an AI-powered, intelligence-led policing system that identified residents deemed likely to commit future crimes based on criminal histories and other data, leading to systematic harassment, arrests, and civil rights violations affecting nearly 1,000 people, including minors.
The Pasco County Sheriff's Office in Florida implemented an intelligence-led policing program starting in 2011 under Sheriff Chris Nocco that uses AI algorithms to identify 'prolific offenders' likely to commit future crimes. The system scores individuals based on arrest histories, police reports, and other intelligence, creating target lists of approximately 100 people at a time. Over five years, nearly 1,000 people were ensnared by the program, at least 10% of them minors.

Deputies conducted over 12,500 visits to targets' homes, often without warrants or probable cause, sometimes surrounding homes with multiple patrol cars in the middle of the night. The program also incorporated a separate initiative using school district data and child welfare records to identify 420 children as potential future criminals based on factors like grades, attendance, and abuse histories. Targets and their families reported systematic harassment, including repeated home visits, code enforcement citations for minor violations like overgrown grass, and arrests of family members.

Several lawsuits were filed alleging constitutional violations, with plaintiffs describing tactics designed to 'make their lives miserable until they move or sue.' The program expanded over time despite criticism from civil rights groups and policing experts, who called the practices 'morally repugnant' and comparable to 'child abuse.'
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviors may be introduced by humans during design and development, such as through reward hacking and goal misgeneralisation, or may result from AI using dangerous capabilities, such as manipulation, deception, or situational awareness, to seek power, self-proliferate, or achieve other goals.
AI system
Due to a decision or action made by an AI system
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed