The NYPD's deployment of facial recognition technology disproportionately targets communities already subject to discriminatory stop-and-frisk practices, with higher concentrations of surveillance cameras in areas with more non-white residents.
Amnesty International's analysis revealed that the NYPD's deployment of facial recognition technology creates discriminatory surveillance patterns across New York City. The research, based on crowdsourced data mapping more than 25,500 CCTV cameras, found that areas with higher stop-and-frisk rates also have greater exposure to facial recognition-compatible cameras. In the Bronx, Brooklyn, and Queens, neighborhoods with higher proportions of non-white residents showed higher concentrations of facial recognition-compatible CCTV cameras. The NYPD used facial recognition technology in at least 22,000 cases between 2016 and 2019, and data shows that Black and Latinx communities have been the overwhelming targets of stop-and-frisk tactics since 2002. The analysis also found that protesters at Black Lives Matter demonstrations in mid-2020 experienced near-total surveillance coverage, with routes to Washington Square Park monitored entirely by NYPD Argus cameras. Amnesty International has sued the NYPD for refusing to disclose public records regarding its acquisition of facial recognition technology and other surveillance tools.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and misrepresentation of those groups.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed