Source: Venture Beat
Apple’s Siri, Amazon’s Alexa, and Google’s Assistant were meant to be controlled by live human voices, but all three AI assistants are susceptible to hidden commands undetectable to the human ear, researchers in China and the United States have discovered. The New York Times reports today that the assistants can be controlled using inaudible commands hidden in radio music, YouTube videos, or even white noise played over speakers, a potentially huge security risk for users.
According to the report, the assistants can be made to dial phone numbers, launch websites, make purchases, and access smart home accessories — such as door locks — while human listeners perceive anything from completely different spoken text to music recordings. In some cases, assistants can be instructed to take pictures or send text messages by commands sent from up to 25 feet away, through a building’s open windows.
Researchers at Berkeley said that they could subtly alter audio files “to cancel out the sound that the speech recognition system was supposed to hear and replace it with a sound that would be transcribed differently by machines while being nearly undetectable to the human ear.” Researchers at Princeton University and China’s Zhejiang University enhanced the attack by first muting the AI device so that its own responses would also be inaudible to the user.
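For readers curious how such a perturbation can be found, here is a minimal sketch of the general approach, assuming a differentiable speech-to-text model trained with CTC loss. The tiny stand-in network, character vocabulary, and the epsilon and step values are illustrative assumptions, not the Berkeley researchers’ actual setup, which targeted a production-scale recognizer.

```python
# Sketch: search for a small perturbation "delta" that pushes an ASR model
# toward a chosen target transcription, while an L-infinity bound keeps the
# change to the waveform nearly inaudible. All names here are hypothetical.
import torch
import torch.nn as nn

VOCAB = 29  # assumed character set: CTC blank + a-z + space + apostrophe

class TinyASR(nn.Module):
    """Stand-in for a real speech recognizer; outputs per-frame log-probs."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(1, 64, kernel_size=320, stride=160)  # ~20 ms frames at 16 kHz
        self.head = nn.Linear(64, VOCAB)

    def forward(self, wav):                          # wav: (batch, samples)
        x = torch.relu(self.conv(wav.unsqueeze(1)))  # (batch, 64, frames)
        return self.head(x.transpose(1, 2)).log_softmax(-1)  # (batch, frames, vocab)

def adversarial_perturbation(model, wav, target_ids, eps=0.002, steps=500, lr=1e-3):
    """Gradient-descend on delta so the model transcribes `target_ids`,
    clamping delta to [-eps, eps] so the edit stays hard to hear."""
    model.requires_grad_(False)                      # only delta is optimized
    delta = torch.zeros_like(wav, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ctc = nn.CTCLoss(blank=0)
    for _ in range(steps):
        log_probs = model(wav + delta)               # (1, frames, vocab)
        frames = log_probs.size(1)
        loss = ctc(log_probs.transpose(0, 1),        # CTC expects (frames, batch, vocab)
                   target_ids,
                   torch.tensor([frames]),
                   torch.tensor([target_ids.size(1)]))
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)                 # inaudibility budget
    return (wav + delta).detach()

# Usage: embed a short hidden command into one second of benign audio.
model = TinyASR().eval()
benign = torch.randn(1, 16000) * 0.1                 # placeholder for music/white noise
target = torch.tensor([[15, 11]])                    # hypothetical ids for "o", "k"
poisoned = adversarial_perturbation(model, benign, target)
```

In published work of this kind, the hard clamp above is typically replaced or supplemented by a distortion penalty or psychoacoustic masking term so the perturbation stays imperceptible; the clamp is the simplest stand-in for that constraint.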
The novelty here is the inaudible nature of the secret commands. TV shows and commercials have openly and deliberately triggered certain digital assistants using spoken phrases, but hiding the phrases is the sonic equivalent of subliminal advertising. However, there are not yet laws against triggering AI devices with hidden phrases, potentially enabling the practice to be exploited without straightforward legal consequences.
If the security issue isn’t fully addressed — although it most certainly will be — the number of potential breaches could be staggering. As the Times points out, phones and speakers with digital assistants are expected to outnumber people by 2021, and over half of American households will have one or more smart speakers by then.
All three of the digital assistant makers are apparently already aware of the vulnerability, though they were vague in explaining existing mitigations. Amazon claims to have taken unspecified steps to ensure the Echo is secure, Google said that Assistant has features to mitigate undetectable commands, and Apple said that its devices have precautions and limitations that preclude some of these commands. It’s unclear whether filtering out inaudible audio will be enough to address the issue on its own, but it’s quite possible that a simple software patch could remove the risks.
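To make the filtering idea concrete, here is a hedged sketch of one possible mitigation, assuming incoming audio arrives as a NumPy array: band-limit the microphone signal to a typical speech band before it reaches the recognizer. The cutoff frequencies and filter order are illustrative assumptions, not any vendor’s actual implementation.

```python
# Sketch: discard energy outside the normal voice band before recognition.
# This would blunt ultrasonic carriers but not adversarial perturbations
# hidden inside audible music, so it is at best a partial defense.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_limit(wav: np.ndarray, sample_rate: int,
               low_hz: float = 80.0, high_hz: float = 8000.0) -> np.ndarray:
    """Band-pass to an assumed speech band; zero-phase to avoid distortion."""
    nyquist = sample_rate / 2.0
    sos = butter(6, [low_hz / nyquist, high_hz / nyquist],
                 btype="bandpass", output="sos")
    return sosfiltfilt(sos, wav)

# Usage: clean one second of 44.1 kHz microphone input.
mic = np.random.randn(44100)                 # placeholder for captured audio
safe = band_limit(mic, 44100)
```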