Clearview AI developed facial recognition software using over 3 billion images scraped from social media and other websites. The tool was deployed to over 600 law enforcement agencies without public scrutiny, raising privacy concerns and drawing legal challenges for violating data protection laws.
Clearview AI, founded in 2017 by Hoan Ton-That and Richard Schwartz, developed facial recognition software built on over 3 billion images scraped without permission from Facebook, YouTube, Twitter, Venmo, and other websites. The system lets users upload a photo and receive matches with links to where those images appeared online. By 2019, over 600 law enforcement agencies were using the tool, including the FBI and the Department of Homeland Security. The company's scraping violated the terms of service of major platforms, and it received cease-and-desist letters from Google, Twitter, Facebook, and Venmo. Multiple lawsuits were filed, including by the ACLU in Illinois and activist groups in California, alleging violations of biometric privacy laws. European regulators declared the service illegal, with France's data protection authority imposing a 20 million euro fine in 2022. The system was criticized for accuracy concerns, a lack of independent testing, and its potential for misuse in surveillance of activists and immigrants. Critics argued it effectively ended public anonymity and created mass surveillance capabilities.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that memorize and leak sensitive personal data, or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, assist identity theft, or cause loss of confidential intellectual property.
AI system
Due to a decision or action made by an AI system
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed