VoiceVantage™ VoiceCheck™ Verification Capability Analysis using AI Deep Fake Voice Applications
Voice-based biometric authentication systems such as VoiceVantage™ VoiceCheck™ may be increasingly vulnerable to AI-generated synthetic voice cloning, commonly known as deep fake technology. To evaluate the resilience of VoiceCheck™ against five popular deep fake applications (FakeYou, PlayHT, Descript, ResembleAI, and ElevenLabs), a series of controlled tests was carried out across two phases, covering both internal and external attack scenarios. Only VoiceCheck™ voiceprints created from 8 kHz landline veriphrase recordings proved minimally vulnerable to deep fake attacks, and only when VoiceCheck™ validation guidelines were not followed. All higher-quality voiceprints remained secure under the recommended VoiceCheck™ configurations. Setting a somewhat higher pass threshold, as an alternative to referring a low score to a Help Desk for validation, was also considered and proved to be a viable option.
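The two score-handling policies compared above can be sketched as decision functions. This is a minimal illustration only: the function names, score scale, and threshold values are assumptions for the sketch, not actual VoiceCheck™ parameters or API.

```python
# Hypothetical sketch of the two policies discussed: (A) a lower pass
# threshold with a gray zone referred to a Help Desk, versus (B) a single,
# somewhat higher pass threshold with no referral band. All thresholds
# and the 0.0-1.0 score scale are illustrative assumptions.

def decide_with_helpdesk(score: float,
                         pass_threshold: float = 0.70,
                         refer_threshold: float = 0.50) -> str:
    """Policy A: scores between the two thresholds go to a Help Desk."""
    if score >= pass_threshold:
        return "accept"
    if score >= refer_threshold:
        return "refer"  # human validation of the low-scoring attempt
    return "reject"

def decide_with_raised_threshold(score: float,
                                 pass_threshold: float = 0.80) -> str:
    """Policy B: one raised threshold, no referral band."""
    return "accept" if score >= pass_threshold else "reject"
```

Under policy B, attempts that policy A would have referred for human review are simply rejected, trading Help Desk workload for a potentially higher false-rejection rate for legitimate callers.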