The Department of Homeland Security uses an Amazon-hosted AI system called ATLAS that analyzes millions of records and can automatically flag naturalized Americans for citizenship revocation based on secret algorithmic criteria.
The U.S. Department of Homeland Security operates an AI system called ATLAS (part of USCIS's Fraud Detection and National Security Data System) that runs on Amazon Web Services servers and analyzes millions of immigration records against federal databases. The system uses pattern-based algorithms and biometric data such as fingerprints to identify potential fraud, public safety, and national security concerns among immigrants and naturalized citizens.

In 2019 alone, ATLAS conducted 16.9 million screenings and generated 120,000 red flags. The system can flag individuals based on their known associates, and in 'exceptional instances' may consider race and ethnicity. When ATLAS makes a negative determination, it sends automated notifications that can lead to denaturalization referrals to ICE in as few as four steps. Documents show that as of April 2020, USCIS had filed paperwork related to denaturalization in 2,628 cases, with 745 pending and 502 referred to the DOJ.

The system's decision-making criteria remain secret, making it nearly impossible to contest algorithmic decisions. Critics argue that the system amplifies bureaucratic mistakes and disproportionately targets certain communities, with the ultimate goal being deportation of naturalized citizens.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes for and unfair representation of those groups.
AI system
Due to a decision or action made by an AI system
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed