Researchers in the United States and China have found that AI assistants such as Siri or Alexa can be controlled with “hidden” commands. The method uses audio signals that are imperceptible to human ears and can even be embedded in music.
Inaudible voice commands
For people, it is just a song or a noise, but for Siri and Co. it is an invitation to open doors or dial premium-rate numbers. The trick is hidden messages in audio signals. Last year, researchers already managed to hide speech commands in frequencies the human ear cannot perceive. A video describing the so-called “DolphinAttack” demonstrates the behavior:
The video description adds:
“DolphinAttack was able to deliver covert voice commands to 7 modern voice recognition systems (e.g. Siri, Alexa) to activate the always-on system and to trigger various attacks, including activating Siri to initiate a FaceTime call on the iPhone, activating Google Now to put the phone into airplane mode, and even manipulating the navigation system in an Audi car.”
The technique shown is still very limited, so no real-world abuse is known to date: the attacker must be within a few meters of the target. That does not mean, however, that the audio attack poses no threat. At the University of Illinois, the range was extended to 7.5 meters. The audio pattern could be distributed in many ways, and it should even be possible to embed it in music.
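The core idea behind such attacks is to amplitude-modulate a voice command onto an ultrasonic carrier; the nonlinearity of a microphone’s hardware then demodulates it back into the audible band, while human ears hear nothing. A minimal NumPy sketch of the modulation step, with an illustrative 25 kHz carrier and a placeholder tone standing in for a recorded command:

```python
import numpy as np

FS = 96_000          # sample rate high enough to represent an ultrasonic carrier
CARRIER_HZ = 25_000  # above the ~20 kHz upper limit of human hearing
DURATION = 0.5       # seconds

def modulate_ultrasonic(baseband: np.ndarray, fs: int = FS,
                        carrier_hz: int = CARRIER_HZ) -> np.ndarray:
    """Amplitude-modulate a baseband signal onto an ultrasonic carrier.

    A microphone's nonlinear response can demodulate such a signal,
    recovering the baseband command for the voice assistant.
    """
    t = np.arange(len(baseband)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Standard AM: offset the baseband so the envelope stays non-negative.
    return (1 + baseband) * carrier / 2

t = np.arange(int(FS * DURATION)) / FS
voice_stand_in = 0.5 * np.sin(2 * np.pi * 440 * t)  # placeholder for a command
ultrasonic = modulate_ultrasonic(voice_stand_in)

# All spectral energy now sits near the carrier (25 kHz +/- 440 Hz sidebands),
# i.e. entirely outside the audible range.
spectrum = np.abs(np.fft.rfft(ultrasonic))
freqs = np.fft.rfftfreq(len(ultrasonic), 1 / FS)
peak_hz = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {peak_hz:.0f} Hz")
```

This is only a sketch of the signal-processing principle; an actual attack additionally depends on a speaker capable of emitting ultrasound and on the specific nonlinear behavior of the target microphone.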
For security reasons, Apple restricts certain HomeKit-related Siri commands and locks them behind the device passcode if users have enabled this protection. If, for example, a front door fitted with a connected smart lock is to be opened by voice command, the passcode must also be entered on an iPhone or iPad. The HomePod, however, lacks this protection. In addition, the “Hey Siri” trigger on the iPhone and iPad is calibrated to the user’s voice.