Security researchers demonstrated that Apple's Face ID facial recognition system on the iPhone X could be bypassed using composite masks costing roughly $150 to $200, raising concerns about the security of biometric authentication systems.
In November 2017, shortly after Apple released the iPhone X with its new Face ID facial recognition system, multiple security research teams attempted to bypass the technology. Vietnamese security firm Bkav demonstrated that Face ID could be fooled by a composite mask, costing approximately $150, made of 3D-printed plastic, silicone, makeup, and paper cutouts. The researchers built the mask from 3D scans and infrared images, representing the eyes with 2D printed images and the nose with hand-sculpted silicone.

Apple had claimed Face ID was highly secure, with a one-in-a-million chance of a false match, and said it had tested the system against Hollywood-quality masks during development. Nevertheless, Bkav's relatively simple mask unlocked iPhone X devices, though the attack required detailed facial scanning and precise positioning. Other teams, including WIRED magazine, attempted similar attacks with more expensive Hollywood-quality masks but failed to bypass Face ID. The weakness also extended to family members: children were reported to have unlocked their parents' devices because of facial similarity.

The researchers emphasized that while the attack was technically feasible, it primarily posed a risk to high-profile individuals rather than average consumers, given the time, effort, and access to the target's face required to create an effective spoofing mask.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
AI system: Due to a decision or action made by an AI system
Unintentional: Due to an unexpected outcome from pursuing a goal
Post-deployment: Occurring after the AI model has been trained and deployed
No population impact data reported.