An AI tool used by ICE to categorize job applicants incorrectly flagged approximately 200 people with no law enforcement experience as experienced officers, sending them to field offices with inadequate training instead of the required full academy course.
Immigration and Customs Enforcement deployed an AI tool to process applications for 10,000 new officers during a recruitment surge. The tool was designed to identify applicants with law enforcement experience for the law enforcement officer (LEO) program, which requires only four weeks of online training, versus eight weeks of in-person training at the Federal Law Enforcement Training Center for applicants without experience. Instead, the tool flagged anyone with the word 'officer' on their resume, including compliance officers and aspiring ICE officers, incorrectly categorizing them as experienced law enforcement. The error affected approximately 200 hires, who were initially placed in field offices without proper training. The mistake was identified in mid-fall, over a month into the recruitment surge, and ICE implemented manual resume reviews to remedy the situation. A DHS spokesperson called it a 'technological snag' and said affected candidates were sent to the training center for full training, with no one placed on enforcement duties without appropriate credentials.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, leaving them prone to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed