We act differently, and more freely, when we're in private. The question is: do we really have privacy when we think we're all alone?
If you count your smartphone, smart TV, laptop, and maybe even a virtual assistant, how many microphones do you have in your home?
As more users are drawn towards convenience, they have started using hands-free assistants. The problem is that the benefits of these devices come at a price. Law enforcement agencies already use data from smart devices as evidence, which makes it important to learn your digital privacy rights.
There is a large segment of netizens who want to stay secure when they’re online. But there’s another segment of users who want more comfort even if it costs them their privacy.
According to privacy experts, the always-on, always-listening devices that we use for our convenience are a threat to security and privacy. There has been an incident where a child ordered expensive items through Amazon’s Alexa without getting parental consent.
And there was another incident where Alexa blurted out some porn websites when a toddler asked it to play a rhyme.
Last year, a Burger King ad made smart devices in people's homes look up the definition of one of its burgers, just by saying the right words.
And if you think that's not creepy, several customers reported that their Alexa started laughing for no good reason. While Amazon said the device had mistaken something else for the command, "Alexa, laugh," it's still pretty unsettling to hear someone laugh in your home when you're presumably alone.
As these examples show, there are several concerns around smart assistants, and yet millions of people are embracing these devices. Reports suggest that virtual personal assistants are widely used in the US, and Gartner predicts that by 2021, spending on virtual assistant devices will cross $3.5 billion.
IoT Devices Aren’t New
IoT has evolved a lot from old chatbots such as PARRY and ELIZA. A chatbot is a piece of software that interacts with humans through a chat interface. It typically looks for keywords in the sentences you type and, based on those words, sends pre-programmed replies or transforms the user's sentence to make a coherent reply.
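The keyword-matching approach can be sketched in a few lines of Python. This is a toy illustration of the technique, not ELIZA's actual script; the keyword table and canned replies are made up.

```python
# Minimal sketch of a keyword-driven chatbot in the style of ELIZA.
# The rules below are illustrative, not ELIZA's real response tables.

RULES = {
    "mother": "Tell me more about your family.",
    "sad": "Why do you feel sad?",
    "computer": "Do machines worry you?",
}
DEFAULT_REPLY = "Please go on."

def reply(user_input: str) -> str:
    """Scan the input for known keywords and return a canned response."""
    for word in user_input.lower().split():
        canned = RULES.get(word.strip(".,!?"))
        if canned:
            return canned
    return DEFAULT_REPLY

print(reply("I argued with my mother today."))  # keyword "mother" matches
```

The assistant never understands the sentence; it only reacts to surface keywords, which is why early chatbots were easy to confuse.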
Modern virtual assistants don't just reply to your voice commands; they can also perform keyword searches, play music, order things from eCommerce stores, switch on the lights, and a lot more.
Some of the Best Known Virtual Assistants
There are several virtual assistants available in the market and they’re all popular by different names. Here are some of the most popular choices and their enabled devices:
- Amazon Alexa – Echo, Echo Dot, and Fire tablets
- Google Assistant – Google Home and Android phones
- Apple Siri – Mac, iPhone, and iPad
- Microsoft Cortana – Any PC running on Windows
No matter which of these assistants you choose, all of them have a very obvious Achilles heel – privacy. The companies that create virtual assistants need to maintain the fine balance between high-end features and user privacy.
Let’s see how these devices work and what the security concerns are.
How Smart Assistants Work and What Makes Them Vulnerable
The smart speakers (virtual assistants) that we have in our houses can perform a number of activities by using apps. These activities are termed actions by Google and skills by Amazon. An action or a skill gives a virtual assistant extra capabilities.
They allow users to interact with the assistant using an interface on their phone or tablet. There are several voice assistant apps to cater to the needs of users. This way, people can interact with their assistants even when they’re not around the device.
Developers are tapping this new market and coming up with voice assistant actions of their own. It's safe to say that the market for virtual assistants (and related products) is booming.
There are thousands of skills and actions; the Alexa Skills Kit alone hosts tens of thousands of skills. And that's not all – interested users can go through the Alexa Skill Blueprints released by Amazon and create their own skills with almost no knowledge of coding.
For example, a user can add a “cat joke” skill that reads out a cat joke for the user. When the user says, “Alexa, tell me a cat joke,” that particular skill will be activated and it will read out a joke for the user.
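A sketch of what such a skill's backend might look like: the JSON shape (`outputSpeech`, `shouldEndSession`) follows the Alexa custom-skill response format, while the intent name `CatJokeIntent` and the jokes themselves are hypothetical.

```python
# Sketch of a "cat joke" custom skill handler. The response structure
# mirrors Alexa's custom-skill JSON; the intent name and jokes are made up.
import json
import random

JOKES = [
    "Why don't cats play poker in the jungle? Too many cheetahs.",
    "What do you call a pile of kittens? A meow-ntain.",
]

def handle_request(event: dict) -> dict:
    """Return a spoken response for the (hypothetical) CatJokeIntent."""
    intent = event.get("request", {}).get("intent", {}).get("name")
    if intent == "CatJokeIntent":
        speech = random.choice(JOKES)
    else:
        speech = "Sorry, I only know cat jokes."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,  # hand control back after answering
        },
    }

event = {"request": {"type": "IntentRequest", "intent": {"name": "CatJokeIntent"}}}
print(json.dumps(handle_request(event), indent=2))
```

When the user says the trigger phrase, the assistant routes the request to this handler and reads `outputSpeech.text` aloud; `shouldEndSession` tells the device whether to stop listening afterwards, which matters in the attacks described below.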
While it’s great that the general public can create their own skills, the problem is that hackers can abuse this system and create seemingly harmless skills that can have a negative impact on the user’s privacy. If such an attack is conducted successfully, it can affect a number of users.
And such a large-scale attack is indeed realistic.
How Hackers Can Misuse These Virtual Assistants
A team of researchers examined the phonetic abilities of Google Home and Amazon Alexa and found that it’s possible to mimic legit voice commands to carry out a cyber-attack.
The team comprised researchers from the University of Virginia, Indiana University Bloomington, and the Chinese Academy of Sciences. They discovered that attackers can use a phenomenon they've called voice squatting. They also demonstrated the possibility of another attack – voice masquerading.
Let’s see what these two terms are and how they operate.
Both Google Home and Alexa come with a third-party developer system that allows other users to create actions and skills.
In voice squatting, a hacker creates a rogue skill whose invocation phrase sounds like a legitimate skill's. The virtual assistant then opens the rogue skill when the user says a certain phrase.
For example, there might be a legit skill called “octopus facts.” A hacker might create a skill called “funny octopus facts.” Now when a user says, “tell me some funny octopus facts,” Alexa can invoke the latter instead of the former.
This lets the rogue skill run on the device. Once the device has been tricked into opening it, the skill can eavesdrop on or record whatever the users are doing.
And once the “funny facts” have been stated, the rogue skill can pretend that it has yielded control to another skill. But it might stealthily operate in the background and transfer sensitive user information to hackers.
The research team explained voice squatting with the help of the Capital One app. If you want to open the legitimate banking app on your Amazon device, you say, "Alexa, open Capital One." The researchers created a skill that opens if the user says, "Alexa, Capital One, please" or "Alexa, capital won."
This rogue skill can hijack Alexa and listen to user conversations. There were other examples as well: a legitimate skill called "rat game" was imitated by rogue skills called "rat game, please" and "rap game."
The study showed that hijacking Alexa through voice squatting was effective about half the time.
Voice masquerading isn’t something new. The study that showed how voice squatting can hijack smart assistants also displayed the effects of voice masquerading.
In April last year, Checkmarx researchers demonstrated how voice masquerading can be used to conduct an attack. In this type of attack, hackers replicate (rather than approximate, as in voice squatting) legitimate skills. These skills behave like the real ones, allowing the attackers to eavesdrop on private conversations.
Both Google Home and Alexa allow voluntary skill termination, which lets a skill end once it has made a voice response to a user request. Weather and trivia skills use this approach.
Once a skill has told you the weather or replied to a trivia question, it will generally go silent or say “Goodbye” so the user can know it has stopped.
A rogue skill can say "Goodbye" but keep listening, or play a silent audio file after the response so that the user assumes the skill has ended. In reality, it keeps working in the background and keeps listening to your conversations.
Both voice squatting and voice masquerading have serious privacy implications and pose a significant threat to the users of virtual assistants. After analyzing Amazon skills, the researchers concluded that similar attacks may already have happened in the real world.
Hot Mics Recording You
Look around and you'll find a number of voice-enabled devices in your home, creating a "hot mic network" that listens in on your conversations.
Devices like Google Home and Amazon Echo might just be listening for the keywords that wake them. However, there's always a chance that the device will mistake another phrase for its wake word and start recording.
And of course, there is always a risk that the manufacturer has the product listen to your conversations and store them for the corporation's own use. Here are some facts about these assistant devices.
Virtual assistants keep a record of your requests and this data is used for targeted advertisements.
People Behind the Virtual Assistants Hear Your Conversations
Amazon employs special teams to listen to the conversations recorded by Alexa. Their job is to transcribe and annotate the recordings and feed them back into the system to help Alexa understand human speech better.
However, the people listening to these recordings are real people who could misuse your personal conversations.
While companies claim that it’s all automatic and in the cloud, they do need humans to make Alexa return more accurate results.
While there are options in the privacy settings to opt out of using your voice recordings for new developments, not many people know about it and thus their voices get recorded and listened to by other people.
As more people get to know about this, they might feel self-conscious and less private in their own space.
This makes it clear that apart from the risks from attacks including voice squatting and voice masquerading, users also face the risk of smart device manufacturing corporations spying on them.
Risk of Device Recordings Getting Hacked
The recordings and personal data that these companies keep can be hacked and thus your details might land in the hands of people who want to do more harm than just sending targeted ads your way.
The data that’s stored by smart assistants can provide advertisers a lot of information about your likes and dislikes and your shopping habits. It can also tell them about your schedule and this can be used for malicious intent.
For example, when you ask your virtual assistant to add an appointment to the calendar, whoever holds that data knows when you'll be out of the house. Normally, this data stays with the device company, where it's relatively safe.
However, this data is practically exposed at two points – when it’s in transit to the off-site servers and when it’s stored on the physical device.
Amazon says its data is transmitted over HTTPS, but it doesn't say anything else about its security protocols. Without that information, it's difficult to judge how secure the data is in transit.
Virtual assistants also come with local storage that caches such data. This data can be hacked remotely or when a hacker has physical access to the device.
Your data logs can be deleted manually from Google Home and Amazon Echo using the device apps.
Compromised Virtual Assistant Stories
A Hacked Baby Monitor
A Washington family used a baby monitor inside their toddler’s bedroom and discovered that someone hacked into it and monitored their child remotely. The stranger was also able to speak disturbing messages into the device.
The hacker used the monitor's night vision lens to watch the movements around the child's room. This harrowing incident involved baby monitors, which are similar to virtual assistants in that they have remotely controlled speakers.
This makes us wonder if smart assistants can also be hacked and controlled by other people.
There are cases of home-embedded IoT devices being compromised. In 2016, a DDoS attack targeted Dyn LLC by exploiting vulnerabilities in millions of home-embedded devices, including DVRs and webcams. The attackers infected the devices with the Mirai malware and turned them into a bot army.
The Speaker Rebellion
In January 2017, the TV channel CW6 aired a news segment about Amazon Echo speakers and their vulnerabilities. It discussed the case of a child ordering an expensive dollhouse from Amazon without parental consent.
During the discussion, one of the hosts said, “I love the little girl, saying ‘Alexa, order me a dollhouse.’”
With that, a command went out to Alexa devices all over San Diego, and people complained that the voice assistant had purchased a dollhouse. That one line uttered on TV was recognized by Alexa as a command – much like what happened with the Burger King commercial.
To users' relief, Amazon assured all these people that their orders would be canceled and they wouldn't have to pay.
The Insecurity of a Nearby Mic – Are We Becoming Self-Conscious Even in the Privacy of Our Homes?
Jay Stanley, a senior policy analyst at the ACLU, shared his experience of the self-consciousness that people have started to feel around virtual assistants.
He said, “I was at a dinner party recently with close friends where the conversation turned to some entirely theoretical, screenplay-writing-type speculations about presidential assassinations—speculations that would be pretty dicey should certain outside parties who did not know us and where we were coming from be listening in. Realizing this as we spoke, the group thought of our host’s Amazon Echo, sitting on a side table with its little light on. The group’s conversation became self-conscious as we began joking about the Echo listening in. Joking or not, in short order our host walked over and unplugged it.”
It's this level of self-consciousness that the possibility of surveillance can bring. A simple dinner conversation becomes uneasy when a smart assistant sits there, listening in on every word. People want to be sure that their devices will not betray them.
Overall, smart assistants and other internet-enabled devices pose a serious threat to privacy, and the threats come from hackers, corporations, and the government.
Is There Any Virtual Assistant That Takes Your Privacy Seriously?
While no virtual assistant is 100% safe and secure, Siri is by far the most secure of them all. There are no user-accessible records that contain your previous queries because they are associated with random ID numbers and not your email or iCloud accounts.
And these IDs are deleted after six months.
It's more complicated with Echo. The device's price tag isn't the ultimate source of revenue here; the main revenue comes from the personal information collected from you, which is sold to advertisers so they can target you better.
Advertisers want to show you more and more targeted recommendations. This is why they’re willing to buy user data and this is what drives profits home for these corporations.
Amazon maintains a conversation database that records the commands you give to your device. Audio data is encrypted as it leaves your home. However, the standard of the encryption is not known and hackers are looking for ways to get to this information.
At CES 2017, a number of smart things were presented, from refrigerators to cars, and almost all of them came with a virtual assistant. This trend of including an all-listening assistant will create new issues related not just to digital security and privacy, but to physical safety as well.
Developers need to make sure that user security is among their top concerns. And if you’re a consumer, here are some tips that will protect you from the all-listening ears.
- Use Echo settings to password-protect shopping actions.
- Turn off the mic on your smart speakers using the button on the device. You'll have to remember to mute it again after every use, which reduces usability, but it does make you a bit more secure.
- Use security tools such as antivirus and anti-malware on your smartphones and computers to minimize the risk of data leaks.
- If someone in your house has a name that sounds like Alexa, change the wake word. Otherwise, whenever you call that person, Alexa will start listening to your conversations.
- Make sure the virtual assistant you buy comes with a mute button. This way, you can know for sure nobody is listening to sensitive conversations. Keep in mind, though, that hackers may be able to remotely deactivate the mute button, so it's best to unplug the device when having private time.
- When you buy a new device, visit the privacy settings and make the necessary changes to keep your conversations private.
- Try to have sensitive discussions at a place not too near the virtual assistant. Most assistants are made to pick up voices from a considerable distance so make sure you’re far enough from it.
- Remember to regularly delete your request logs from the assistant. This will minimize the personal information that’s available for hacking.
- Change the passwords of all devices connected to your home network – default passwords are easy to hack. Also, install software patches as soon as they're released. This way, known vulnerabilities get fixed and your system grows stronger with each update.
Secure your digital life with Surfshark
Only $1.99/mo. 30-day money-back guarantee with every plan. Buy now.