
Digital assistants deceived
Scientists in China and the United States have shown that it is possible to deceive these new digital assistants with commands sent at ultrasonic frequencies, inaudible to the human ear. If such attacks become widespread, users may end up typing their commands rather than speaking them, defeating the purpose of a voice assistant.
The BBC published a story about scientists at Zhejiang University who demonstrated that digital assistants respond to voice commands pitched so high that, like dolphin calls, humans cannot hear them. The assistants registered these signals and responded to them accurately: the researchers were able to make smartphones dial numbers of their choosing and visit "suspicious" websites.
According to the report, the findings are being investigated at Amazon, Google, and other companies.
So-called "wake words" are short phrases that act as a key, activating the AI voice recognition on your smartphone. Each assistant listens for its own phrase: examples include "OK Google", "Hey Siri" and, of course, "Alexa".
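The idea can be illustrated with a minimal sketch: the assistant stays idle until a transcript of incoming audio begins with a known wake word. This is an assumption about the general pattern, not any vendor's actual implementation, and the `is_activated` function and `WAKE_WORDS` list are hypothetical names.

```python
# Minimal, hypothetical sketch of a wake-word gate. Real assistants
# run dedicated on-device keyword detectors, not string matching.
WAKE_WORDS = ("ok google", "hey siri", "alexa")

def is_activated(transcript: str) -> bool:
    """Return True if the transcript begins with a known wake word."""
    text = transcript.lower().strip()
    return any(text.startswith(word) for word in WAKE_WORDS)

print(is_activated("Alexa, what time is it?"))  # True
print(is_activated("What time is it?"))         # False
```

The point of the attack described below is that this gate only checks *what* was said, not *how* it arrived: a command demodulated from ultrasound passes it just as well as a spoken one.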
Researchers in China emitted voice commands through a speaker set to operate at ultrasonic frequencies. They said they were able to activate assistants on a range of Apple and Android devices, as well as "smart" speakers, from several metres away.
Rather than speaking aloud, the researchers in China shifted recorded voice commands into the ultrasonic range and played them at AI assistants on a variety of devices. The setup was simple and required only off-the-shelf components (though it still needed a power source). Researchers at Princeton University also used the same technique in their office, activating an Amazon Echo smart speaker. According to the research study done in the United States, these attacks are highly effective because the commands are inaudible to people nearby, yet the devices' microphones still register them as human speech.
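The underlying trick, as described in public reporting on this research, is amplitude modulation: the voice command is carried on an ultrasonic tone, and nonlinearity in a microphone's front end recreates the audible signal. The sketch below is illustrative only, not the researchers' code; the sample rate, carrier frequency, and the 400 Hz tone standing in for a voice command are all assumptions.

```python
# Illustrative sketch: amplitude-modulating a baseband "voice" signal
# onto an ultrasonic carrier, then modeling a nonlinear microphone
# with a simple squaring, which brings the signal back into the
# audible band. Parameters are assumed for demonstration.
import numpy as np

FS = 192_000          # sample rate high enough to represent ultrasound (Hz)
CARRIER_HZ = 25_000   # carrier above the range of human hearing
DURATION_S = 0.01

t = np.arange(int(FS * DURATION_S)) / FS

# Stand-in for a recorded voice command: a 400 Hz tone.
voice = np.sin(2 * np.pi * 400 * t)

# Classic AM: carrier scaled by (1 + m * voice), m = modulation depth.
m = 0.8
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
transmitted = (1 + m * voice) * carrier

# Squaring models a nonlinear microphone front end; expanding the
# square yields a term proportional to the original voice signal.
received = transmitted ** 2
```

Because the transmitted waveform contains only frequencies around 25 kHz, a person standing next to the speaker hears nothing, while the microphone's nonlinearity recovers the command.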
One of the Chinese researchers suggested that an attacker could hide ultrasonic messages in online videos, or broadcast them in public places close to a target. During the tests it was possible to place phone calls, visit websites, and take photos.
There is a strong security measure that can protect digital assistants from this kind of deception: the voice-training feature offered by some of them, such as Google Assistant. As media reports note, the attack does not work on systems configured to recognize voice commands from only one person. In addition, Apple's Siri requires the phone to be unlocked before allowing any 'sensitive' activity. Furthermore, both Apple and Google offer the ability to disable wake words, so that their digital assistants cannot be activated without permission.
Author: PC-GR
The World of Technology