A Guardian Australia investigation found that the Australian government's voiceprint authentication system used by Centrelink and the ATO can be fooled by AI-generated voice clones created from just four minutes of audio.
The Australian government operates a voiceprint authentication system, used by Centrelink and the Australian Taxation Office (ATO), that allows people to verify their identity over the phone using voice biometrics combined with a customer reference number. As of February, 3.8 million Centrelink clients used the voiceprint service, and over 7.1 million people had verified their voice with the ATO.

Guardian Australia conducted an investigation following reports of AI voice-cloning attacks on overseas banking services. A Guardian journalist created an AI clone of their own voice from just four minutes of audio and used it, together with their customer reference number, to gain access to their own Centrelink self-service account.

Services Australia describes the voiceprint system as 'secure, accurate and reliable' and claims it can detect when someone is using a recording; the investigation, however, demonstrated the system's vulnerability to AI-generated voice synthesis. The self-service phone system gives access to sensitive information, including benefit payments, and can authorize the mailing of replacement cards.

The underlying technology is provided by Nuance, which claims its latest systems can detect synthetic voices in 86% to 99% of cases. When contacted about the vulnerability, Services Australia declined to say whether the technology would be changed or removed.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed