Research trials new way to stop voice spoofers in their tracks
Tue, 23rd Jun 2020

Voice assistants like Amazon Alexa, Siri, and Google Assistant provide plenty of life's conveniences, from managing calendars and controlling smart homes to making phone calls. To do this, they generally rely on voice commands and voice activation, and assume that the person giving the command is authorised to do so.

But there are few ways to prove that the person issuing a command is who they say they are, especially as cyber attackers come up with new ways to impersonate a person's voice.

Attackers can record a voice and replay it to a voice assistant, with the aim of faking authentication. They can also stitch samples together to create a more convincing spoof.

Now, researchers at Data61, the data and digital arm of Australia's national science agency CSIRO, have come up with a way of detecting when someone is using a fake or ‘replayed' voice to gain access to those voice assistants.

Data61 cybersecurity research scientist Muhammad Ejaz Ahmed explains that privacy-preserving technologies are becoming more important as voice tech works its way into our daily lives.

As many as 275 million voice assistant devices may be in homes by 2023, according to Juniper Research – so it's important to get privacy basics right.

“Voice spoofing attacks can be used to make purchases using a victim's credit card details, control Internet of Things connected devices like smart appliances and give hackers unsolicited access to personal consumer data such as financial information, home addresses and more,” says Ahmed.

Data61's detection technology is called Void (Voice liveness detection) and can be embedded into a voice assistant or smartphone. It can distinguish a person's live voice from a recording simply being replayed through a speaker.

Or in Data61's words, Void can tell “the differences in spectral power between a live human voice and a voice replayed through a speaker, in order to detect when hackers are attempting to spoof a system”.

“Although voice spoofing is known as one of the easiest attacks to perform, as it simply involves a recording of the victim's voice, it is incredibly difficult to detect because the recorded voice has similar characteristics to the victim's live voice. Void is game-changing technology that allows for more efficient and accurate detection, helping to prevent people's voice commands from being misused”.

While similar detection technologies rely on deep learning models, Void uses a spectrogram, a visual representation of a signal's frequency content as it varies with time. This, Data61 says, can reveal the ‘liveness' of a voice.
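
The article doesn't detail how Void extracts these features, but the general idea can be sketched in a few lines of Python. The example below is a hypothetical illustration, not Data61's actual method: it computes a spectrogram with scipy and uses the share of spectral power in the low frequencies as a crude liveness cue, on the assumption that small loudspeakers reproduce low frequencies less faithfully than a live human voice.

    # Hypothetical illustration of a spectral-power liveness check,
    # not Data61's actual Void features, which this article does not detail.
    import numpy as np
    from scipy.signal import spectrogram

    def low_frequency_power_ratio(audio: np.ndarray, sample_rate: int,
                                  cutoff_hz: float = 1000.0) -> float:
        """Fraction of total spectral power below cutoff_hz."""
        # A spectrogram shows how a signal's power is distributed
        # across frequencies as time passes.
        freqs, _, power = spectrogram(audio, fs=sample_rate)
        return float(power[freqs < cutoff_hz].sum() / power.sum())

    def looks_live(audio: np.ndarray, sample_rate: int,
                   threshold: float = 0.6) -> bool:
        # The cutoff and threshold here are illustrative; a real system
        # would learn them from labelled live and replayed samples.
        return low_frequency_power_ratio(audio, sample_rate) > threshold

In practice, a detector like Void is tuned on large sets of labelled live and replayed recordings, which is where the trials described below come in.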

So far Data61 has trialled Void with Samsung and Sungkyunkwan University in South Korea. Tested against a Samsung dataset and the dataset from the Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof) challenge, Void achieved accuracies of 99% and 94% respectively.

The trials involved 255,173 voice samples generated with 120 participants, 15 playback devices, and 12 recording devices, as well as 18,030 publicly available voice samples generated with 42 participants, 26 playback devices, and 25 recording devices.

Data61 senior research scientist Adnene Guabtni offers a few tips for people with voice assistants:

  • “Always change your voice assistant settings to only activate the assistant using a physical action, such as pressing a button. 
  • On mobile devices, make sure the voice assistant can only activate when the device is unlocked. 
  • Turn off all home voice assistants before you leave your house, to reduce the risk of successful voice spoofing while you are out of the house. 
  • Voice spoofing requires hackers to get samples of your voice. Make sure you regularly delete any voice data that Google, Apple or Amazon store. 
  • Try to limit the use of voice assistants to commands that do not involve online purchases or authorisations – hackers or people around you might record you issuing payment commands and replay them at a later stage.”