A trio of researchers from the University of Texas at San Antonio and the University of Colorado Colorado Springs has come up with a way to send malicious commands to voice assistants on smartphones and smart speakers without the user's knowledge: the Near-Ultrasound Inaudible Trojan, or NUIT.
"If you play YouTube on your smart TV, that smart TV has a speaker, right? The sound of NUIT malicious commands will [be] inaudible, and it can attack your cell phone too and communicate with your Google Assistant or Alexa devices," says Guenevere Chen, associate professor and co-author of the NUIT paper. "It can even happen in Zooms during meetings. If someone unmutes themselves, they can embed the attack signal to hack your phone that’s placed next to your computer during the meeting."
The attack works by using a speaker, either one already built into the target device or one nearby, to play audio close to, but not quite at, ultrasonic frequencies: high enough to be effectively inaudible, yet low enough to be reproduced by off-the-shelf audio hardware. If the first malicious command silences the device's responses, subsequent actions, such as unlocking a door or disarming an alarm system, can be triggered without any audible notification.
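The paper's exact signal construction isn't detailed here, but a standard way to carry a voice command in the near-ultrasound band is amplitude modulation. The sketch below illustrates the idea; the 19 kHz carrier, 0.8 modulation depth, and the 400 Hz stand-in tone are illustrative assumptions, not values from the paper:

```python
import numpy as np

FS = 48_000       # sample rate (Hz) supported by common consumer audio hardware
CARRIER = 19_000  # near-ultrasound carrier (Hz): hard to hear, below the 24 kHz Nyquist limit
DURATION = 1.0    # seconds

t = np.arange(int(FS * DURATION)) / FS

# Stand-in for a voice command: a 400 Hz tone (a real attack would use recorded speech).
command = np.sin(2 * np.pi * 400 * t)

# Amplitude-modulate the command onto the carrier. The result occupies roughly
# 18.6-19.4 kHz: ordinary speakers can reproduce it, but listeners hear little or nothing.
modulated = (1 + 0.8 * command) * np.sin(2 * np.pi * CARRIER * t)
```

The modulated signal contains energy only around the carrier, which is why it slips past human ears while remaining playable through a TV, laptop, or phone speaker.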
"This is not only a software issue or malware. It’s a hardware attack that uses the internet. The vulnerability is the non-linearity of the microphone design, which the manufacturer would need to address," says Chen. "Out of the 17 smart devices we tested, [only] Apple Siri devices need to steal the user's voice while other voice assistant devices can get activated by using any voice or a robot voice."
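Chen's point about non-linearity can be illustrated numerically: a microphone whose response includes even a small quadratic term effectively multiplies the incoming signal by itself, shifting a copy of the modulation envelope down into the audible baseband that the voice assistant processes. The quadratic coefficient and the moving-average filter below are toy stand-ins for real microphone behavior, not a model from the paper:

```python
import numpy as np

FS = 48_000
t = np.arange(FS) / FS  # one second of samples

# An amplitude-modulated near-ultrasound signal, as an attacker might play it
# (400 Hz tone standing in for speech, 19 kHz carrier).
envelope = 1 + 0.8 * np.sin(2 * np.pi * 400 * t)
x = envelope * np.sin(2 * np.pi * 19_000 * t)

# Toy model of a non-linear microphone: y = x + a * x^2. Squaring the carrier
# produces a DC term plus a copy of the envelope at baseband, in the
# frequency range the voice assistant treats as speech.
y = x + 0.1 * x ** 2

# A crude moving-average low-pass filter stands in for the filtering in the
# microphone's ADC path, which discards the 19 kHz components.
kernel = np.ones(64) / 64
baseband = np.convolve(y, kernel, mode="same")
# `baseband` now carries a clear 400 Hz component, even though nothing
# near 400 Hz was ever played through the speaker.
```

A perfectly linear microphone would pass only the 19 kHz content, which the later filtering removes; it is the quadratic term that "demodulates" the hidden command, which is why Chen describes the fix as a hardware change.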
This isn't the first attack we've seen in which otherwise-inaudible messages are used to covertly control voice-activated assistants. In 2019, a team of researchers used parametric speakers to beam ultrasonic audio at the microphones of smart home systems, audible only where the two beams crossed; that same year, another team found that devices with MEMS microphones could be triggered by sending commands as light rather than sound; and in 2020, a third team sent malicious commands to smart devices by vibrating the table on which they sat.
While a true protection against NUIT would require modified hardware, Chen has some advice for those concerned about the attack: use headphones. "If you don’t use the speaker to broadcast sound, you’re less likely to get attacked by NUIT," she explains. "Using earphones sets a limitation where the sound from earphones is too low to transmit to the microphone. If the microphone cannot receive the inaudible malicious command, the underlying voice assistant can't be maliciously activated by NUIT."
The team's paper is to be presented at the 32nd USENIX Security Symposium in August; additional details, including a number of demos, are available on the project's website.