Experts warn of morphing threat to voice biometrics
28 September 2015 15:09 GMT

Researchers from a US university say that voice-imitation attacks built from harvested speech samples could increasingly be used to breach both automated and human authentication systems.

Voice morphing software will enable hackers to launch these attacks, found a team at the University of Alabama, Birmingham (UAB).

In research presented at the European Symposium on Research in Computer Security (ESORICS) in Vienna, Austria, the UAB team warned that people could inadvertently leave voice samples as part of daily life.

“Because people rely on the use of their voices all the time, it becomes a comfortable practice,” said Nitesh Saxena, Ph.D., director of the Security and Privacy In Emerging computing and networking Systems (SPIES) lab, and associate professor of computer and information sciences at UAB.

“What they may not realize is that level of comfort lends itself to making the voice a vulnerable commodity. People often leave traces of their voices in many different scenarios. They may talk out loud while socializing in restaurants, giving public presentations or making phone calls.”

As a second case study, the research team examined the implications of voice theft for human communications. In a controlled study, the voice-morphing tool was used to imitate two famous celebrities, Oprah Winfrey and Morgan Freeman.

The UAB study – a collaborative project involving the UAB College of Arts and Sciences Department of Computer and Information Sciences and the Center for Information Assurance and Joint Forensics Research – took audio samples and demonstrated how they can be used to compromise a victim’s security and privacy.

Once an attacker defeats voice biometrics with a faked voice, they could gain unfettered access to whatever system – a device or a service – relies on that authentication.
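To make the attack surface concrete, speaker-verification systems generally compare a voice "embedding" extracted from a sample against an enrolled template and accept the speaker if the similarity clears a threshold. The sketch below is a generic, hypothetical illustration of that decision – not the study's system, and the vectors and threshold are invented for the example:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(enrolled, sample, threshold=0.8):
    """Accept the speaker if the sample is close enough to the template."""
    return cosine_similarity(enrolled, sample) >= threshold

# Hypothetical embeddings: a morphed voice engineered to sit near the
# victim's enrolled template will clear the similarity threshold.
template = [0.9, 0.1, 0.4]
morphed  = [0.85, 0.15, 0.42]
print(verify(template, morphed))  # prints True
```

The point of the sketch is that the authenticator never sees a "voice", only a similarity score – so any synthesis that lands close enough to the template in embedding space is accepted.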

“As a result, just a few minutes’ worth of audio in a victim’s voice would lead to the cloning of the victim’s voice itself […] The consequences of such a clone can be grave. Because voice is a characteristic unique to each person, it forms the basis of the authentication of the person, giving the attacker the keys to that person’s privacy,” said Saxena.

Results showed that a majority of advanced voice-verification algorithms were defeated by the researchers’ attacks, rejecting the morphed samples only 10-20% of the time. Humans tasked with verifying voice samples fared little better, rejecting only about half of the morphed clips on average.
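As a rough illustration of what those figures mean for an attacker (this is simple arithmetic on the rates quoted above, not the study's own analysis), a rejection rate translates directly into an acceptance rate for morphed samples:

```python
# Illustrative arithmetic only: the rates below are the figures quoted
# in the article, not raw data from the UAB study.

def acceptance_rate(rejection_rate: float) -> float:
    """Fraction of morphed samples an authenticator would accept."""
    return 1.0 - rejection_rate

# Machine verifiers: 10-20% rejection means 80-90% of attacks accepted.
machine_low = acceptance_rate(0.20)
machine_high = acceptance_rate(0.10)

# Human listeners: ~50% rejection means roughly half of attacks accepted.
human = acceptance_rate(0.50)

print(f"machine acceptance: {machine_low:.0%}-{machine_high:.0%}")
print(f"human acceptance:   {human:.0%}")
```

In other words, even the stronger (machine) verifiers in the study accepted the large majority of morphed samples.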
