For as long as long-distance communication technologies have existed, people have worried that the privacy of those communications could be violated. It’s been the case with letters, telegraphs, telephones, emails, text messages and, more recently, VoIP services and social media messaging platforms.
As irrational as it can sometimes seem, these concerns come from a pretty genuine place. Whenever we use these long-distance methods of communication, we’re putting our faith in the provider of that service to respect our right to confidentiality and privacy. That’s not always easy to do - especially given the long history of that trust being abused.
It didn’t take long after its invention for business and government users of the telegraph to realise that, without encryption, their messages could easily be intercepted or altered. Fast-forward about a century and a CIA program dubbed HT-Lingual actively intercepted, opened and photographed more than 215,000 letters over two decades (the program was terminated in 1973).
More recently, in 2013, the Snowden leaks highlighted how insecure massively-popular communication technologies like the internet proved to be in the face of the NSA’s sprawling, catch-all PRISM program. Microsoft, in particular, fell under some scrutiny after it was reported that the company worked closely with the NSA to allow the agency to circumvent the encryption used by both Skype and Outlook.
So, as tin-foil-hat as it sounds, it was only natural that, with the rise of smart speaker products like the Google Home and Amazon Echo, the question of privacy would arise once again.
Are people inviting these new, exciting, “smart” products into their homes only to be taken advantage of?
Yes - but not in the way you think.
Google Assistant devices work by actively listening for a “hotword” or specific phrase -- by default, this is set to “OK Google” or “Hey Google”. This is why, when you first set up the Google Assistant, it'll ask you to say these hotwords aloud - so that it has a locally-stored audio sample to match recordings against.
In theory, this trigger phrase acts as a key that unlocks the recording function of the device. Once the phrase is heard, the device records a few seconds of audio, sends it to the cloud for analysis and then delivers the server’s response to the user.
Google say as much in their own online FAQ about Google Home. According to them, “Google Home listens in short (a few seconds) snippets for the hotword. Those snippets are deleted if the hotword is not detected, and none of that information leaves your device until the hotword is heard.”
“When Google Home detects that you've said "Ok Google" or that you've physically long pressed the top of your Google Home device, the LEDs on top of the device light up to tell you that recording is happening; Google Home records what you say, and sends that recording (including the few-second hotword recording) to Google in order to fulfill your request.”
So while your Google-powered smart speaker is constantly listening to you, it stores that ‘ambient’ data locally and constantly overwrites it whenever it fails to detect any wake words.
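The behaviour Google describes - a short rolling buffer that is continually overwritten, with audio only ever leaving the device after a hotword match - can be sketched in a few lines of Python. This is a toy model, not Google's actual implementation: the class, the frame-per-second simplification and the three-second buffer length are all assumptions for illustration.

```python
from collections import deque

BUFFER_SECONDS = 3  # hypothetical rolling-snippet length, not Google's real figure
HOTWORDS = ("ok google", "hey google")

class HotwordListener:
    """Toy model of on-device hotword spotting: audio sits in a small
    rolling buffer and is discarded unless a hotword is detected."""

    def __init__(self):
        # Fixed-size buffer: once full, the oldest frame is overwritten.
        self.buffer = deque(maxlen=BUFFER_SECONDS)
        self.uploaded = []  # stands in for "sent to the cloud"

    def hear(self, frame: str):
        self.buffer.append(frame)
        # Only a hotword match ever lets audio leave the "device".
        if any(h in frame.lower() for h in HOTWORDS):
            # The upload includes the few-second hotword snippet itself.
            self.uploaded.append(list(self.buffer))
            self.buffer.clear()

listener = HotwordListener()
for frame in ["dinner chat", "more chat", "hey google what's the time"]:
    listener.hear(frame)
# Ambient frames were never uploaded on their own; they only left the
# buffer as part of the hotword-triggered snippet.
```

The key point the sketch captures is the `maxlen` on the buffer: ambient audio is structurally incapable of accumulating, because new frames push old ones out.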
That said, there were some reports late last year of a fault in the hardware of the new Google Home Mini that caused a small number of units to be stuck in recording mode. However, Google has since rolled out software patches that resolve this issue by disabling the device’s touch-pad.
"We take user privacy and product quality concerns very seriously. Although we only received a few reports of this issue, we want people to have complete peace of mind while using Google Home Mini," the Google spokesperson said at the time.
As for the recordings that the device makes whenever it does detect those wake words, these are stored - and accessible to you - via the Google Home app. Using the app, you can listen back to audio recordings of any ‘interaction’ you’ve ever had with the Google Home. If that makes you a little uneasy, it should.
Thankfully, Google insists that you can delete those recordings through the My Activity section of the app anytime. You can also disable the online storage of these recordings, though Google have indicated that this will more-or-less prevent you from getting the full smart-speaker experience -- as it prevents the Assistant from learning from your interests and behaviors.
It’s not impossible that this data could be backed up in some form by Google elsewhere, but given this disclaimer it seems improbable.
Google also note that “when you delete items from My Activity, they are permanently deleted from your Google Account. However, Google may keep service-related information about your account, like which Google products you used and when to prevent spam and abuse and to improve our services.”
Again, the answer is closer to sort-of than a solid yes or no. Like the Google Home, Amazon’s Echo products are always listening but not necessarily always recording.
According to Amazon, “Amazon Echo, Echo Plus, and Echo Dot use on-device keyword spotting to detect the wake word. When these devices detect the wake word, they stream audio to the Cloud, including a fraction of a second of audio before the wake word.”
Essentially, the Amazon Echo (and all its variants) listen constantly without actually recording anything. Then, once they hear the wake word, they activate, stream a few seconds of audio to the cloud and wait on, then deliver, a response from the server.
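The one detail that distinguishes Amazon's description from Google's is the pre-roll: the stream sent to the cloud includes a fraction of a second of audio captured *before* the wake word. A minimal Python sketch of that behaviour, again purely illustrative (the class, frame model and pre-roll size are all assumptions, not Amazon's implementation):

```python
from collections import deque

# Hypothetical: a handful of frames standing in for the "fraction of a
# second" of audio Amazon says is kept ahead of the wake word.
PRE_ROLL_FRAMES = 5

class EchoModel:
    """Toy model of wake-word-gated streaming with a pre-roll buffer."""

    def __init__(self):
        self.pre_roll = deque(maxlen=PRE_ROLL_FRAMES)

    def process(self, frames):
        """Return what would be streamed to the cloud, or [] if the
        wake word never occurs."""
        it = iter(frames)
        for frame in it:
            if "alexa" in frame.lower():
                # Wake word heard: stream pre-roll + wake word + the rest.
                return list(self.pre_roll) + [frame] + list(it)
            # No wake word yet: frame only joins the rolling pre-roll.
            self.pre_roll.append(frame)
        return []  # nothing ever leaves the device

echo = EchoModel()
out = echo.process(["idle chatter", "more chatter", "alexa, play music", "thanks"])
```

Note that the "idle chatter" frames appear in the stream only because they happened to fall inside the short pre-roll window when the wake word arrived; on their own they would have been silently overwritten.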
Like the Google Home, audio recorded by the Echo is stored online and, again like the Home, you can review and delete your interactions with Alexa by visiting History in Settings in the Alexa App.
Good question! Unfortunately, this is where things get a bit sticky.
According to Google, “Your security comes first in everything we do. If your data is not secure, it is not private. That is why we make sure that Google services are protected by one of the world’s most advanced security infrastructures. Conversations in Google Home are encrypted by default.”
While the above explanation is pretty vague, it does make it sound like your recordings are being stored pretty securely. It helps that Google are one of the few tech giants to have never really suffered or disclosed any sort of large-scale security breach.
That’s not to say it won’t or couldn’t happen. However, based on the evidence and knowledge available to us now, it seems unlikely.
That said, Google don’t shy away from the fact that your data -- while secure -- isn’t sitting idle. The company say they do that “to make our services faster, smarter, and more useful to you, such as by providing better search results and timely traffic updates.”
“Data also helps protect you from malware, phishing, and other suspicious activity. For example, we warn you when you try to visit dangerous websites. Also, on surfaces where we show ads, we use data to show you ads that are relevant and useful, and to keep our services free for everyone.”
“Google Home learns over time to provide better and more personalized suggestions and answers”, they also say.
What this means in the broadest terms is that Google will - at a baseline - use your data to make the way it stores your data secure. Then, beyond that, they'll likely use that data to tailor your ad profile in much the same way as they do your online searches or Play Store purchases.
If managing the parts of your life that Google Home will gather snippets of sounds a lot like managing your social media security settings or the app permissions on your Android phone -- that’s because it is.
This means that, to a degree, your Google Home data is only going to be as secure as you make it. The usual rules apply here: don’t connect your Google account to apps or services that look a bit dodgy, and always read what parts of your data are being accessed and by whom.
Amazon insist similar measures are in place for their Echo devices. However, they have a bit of a reputation for being less transparent than Google when it comes to these things -- and their track record on security breaches isn’t quite as strong either.
As scary as it sounds, there haven’t been any major or well-documented exploits that have seen Google Home speakers hijacked. At least, none that we know of yet.
There was an incident in 2017 where a TV advertisement by Burger King hijacked the smart speakers of viewers by loudly and clearly asking “Okay Google, what is the Whopper burger?” This particular trick has actually been highlighted -- deliberately and accidentally -- a few times in the past, and should it continue to be exploited by advertisers to the frustration of users, it’s not impossible to imagine that Google and Amazon will investigate finding some sort of hardware fix that can distinguish between a real or simulated voice.
Things are a little less rosy for the Amazon Echo. In 2017, MWR InfoSecurity successfully compromised an Amazon Echo by exploiting a vulnerability in the device to turn it into a 'wiretap' without affecting its overall functionality.
According to MWR "By removing the rubber base at the bottom of the Amazon Echo, the research team could access the 18 debug pads and directly boot into the firmware of the device, via an external SD card, and install persistent malware without leaving any physical evidence of tampering. This gained them remote root shell access and enabled them to access the 'always listening' microphones."
Amazon say that the 2017 Amazon Echo and Echo Dot models have since been modified to eliminate this vulnerability. However, the emergence of further exploits -- both for the Echo and the Google Home -- in the future feels like an uncomfortable but very real possibility.
In a world where both KRACK and Spectre vulnerabilities have been revealed in recent months, a hack for smart speakers suddenly doesn’t seem so wild. Unfortunately, we’re unlikely to learn of any such exploit -- until after the damage has been done.
We spoke to McAfee's Australian Chief Technology Officer Ian Yip about whether or not ordinary customers should be concerned.
According to him, "Today, “smart means insecure” when it comes to emerging technology built for consumers. The number of cyber incidents related to smart technologies will continue to rise as cyber-attackers typically break in via the weakest points on any network."
"To date, there have not been any high-profile incidents of smart speakers being exploited by cyber-attackers. However, the possibility cannot be discounted given the inherent risks in all things “smart”. One only needs to look at the Mirai incidents of late 2016 that used compromised smart devices against victims which included Twitter, Netflix, and Reddit.""As such, consumers should remain vigilant, always be looking to improve their cyber safety awareness, and use technology to help where relevant," he says.
Probably. The theoretical pros and cons of spying on their customers don’t quite add up for either Google or Amazon - revealing this to be an unlikely scenario. They’re both hugely popular multi-billion dollar global companies in competition with one another. Even if you personally don't trust these companies to be accountable to you on an individual level, they are almost certainly accountable to their shareholders - and they don’t need to be spying on you 24/7 to keep them happy.
As easy as it is to embrace your inner conspiracy theorist and imagine these unethical, faceless corporations holding on to that data and selling it off to the highest bidder, the potential backlash of being caught in such machinations far outweighs any clear advantage it might offer.
Both Google and Amazon already have troves upon troves of data on their customers. Could 24-hour recordings of every customers' home life realistically add enough marketing value to that to be worth the risk of being found out? Probably not.
Unfortunately, when it comes to government-sponsored surveillance, things are a little more murky. For one, it is very possible that, as the legal system catches up with this technology, companies like Amazon or Google could be compelled by courts to hand the smart speaker recordings they store over to law enforcement or other authorities.
On one hand - there was a case in 2017 where Amazon refused to hand over data gathered by Alexa to law enforcement authorities investigating a murder in Arkansas. On the other, when asked by Gizmodo in 2016, the FBI could neither confirm nor deny that the agency had ever wiretapped an Echo.
Most didn’t believe that mass surveillance operations like HT-Lingual or those detailed in the Snowden leaks were possible until whistleblowers revealed them. Again, it’s a scary possibility that an exploit of these devices for surveillance purposes already exists and we just don’t know about it yet.
Should I be concerned?
All things considered, the answer here is probably yes.
However, realistically, you shouldn't be any more concerned than you might be about the security of your mail or phone line. As we mentioned at the start of this article, no major communication technology has managed to escape being used for surveillance purposes. Therefore, it’s entirely possible -- maybe even inevitable -- that somewhere down the line, smart speakers could join the list.
Ultimately, it really comes down to whether you can place sufficient faith in the idea that the companies behind these smart speakers will respect your right to confidentiality and privacy. Thus far, apart from regular, often-healthy skepticism, there’s nothing to suggest that they won’t.