Hey, Siri: Exploit Me

This morning, I was listening to a podcast on my Apple iPhone when – quite unexpectedly – my listening reverie was broken by the familiar voice of Siri, Apple’s voice-activated assistant, wanting to know which Barry I wanted to call.

What?

I hadn’t asked Siri to do anything. But apparently, the sequence of words coming out of the podcast I was listening to sounded as though I had.

A feature of the latest iterations of iOS (8.0 and later) is that Siri will activate – if your device is plugged into a power source – simply upon hearing the phrase “Hey, Siri.” And I had just witnessed a demonstration that anyone within earshot – even a recorded someone – can trigger the feature.

This phenomenon isn’t unique to Apple.

Microsoft’s XBox One had a similar spate of reports of devices activating whenever Aaron Paul’s XBox One commercials were playing and the console heard its activation phrase (“XBox On!” Damn it!).

The Google Chrome browser also listens for a spoken trigger, reacting when it hears the phrase “OK, Google.” While listening to the This Week in Google podcast a few weeks back, host Leo Laporte inadvertently activated Chrome by uttering the phrase.

But beyond being annoying byproducts of our devices becoming more helpful and predictive, these incidents raise a real question: can this software behaviour be shaped into a real-world attack vector?

Dude… are you even serious?

Totally.

Let me unequivocally state that my intent is not to provide a recipe for how this might work, or how one might stage such an attack.

But let’s talk this out a bit, to see: could this type of exploit actually work?

The biggest security vulnerabilities in almost every system, even today, aren’t your firewall being cracked by international hackers or brute-force attacks on your databases; they’re people in positions of trust inside your organization (with intimate knowledge of how your systems work), and social engineering.

Let’s consider the following scenario:

  • An activation phrase, plus a recorded “payload”, is embedded in an online video or podcast.
  • The payload media is played within earshot of a device with an automated attendant and a well-known activation phrase.
  • The payload is some call to action (send me a password, withdraw cash, deposit money, what’s your PIN) aimed at a receiving party who doesn’t know the message originated as an automated prompt to the victim’s artificial assistant.

Sound silly or implausible? The scenario above has all the components of a classic security attack vector:

  • A delivery mechanism (a recorded message, podcast, or video)
  • An exploit (automated software that uncritically reacts to instructions from anyone – sketched below)
  • A channel for the payload to reach its mark (SMS, email, messaging)
  • A payload (a socially engineered call to some action)
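
To make the exploit component concrete, here’s a minimal sketch of the vulnerable pattern: an always-on listener that acts on any audio matching its wake phrase, with no check on who – or what – is speaking. The `recognize` and `execute` callables are hypothetical stand-ins, not any vendor’s actual API:

```python
# Toy always-listening assistant loop: the exploitable pattern, reduced
# to its essence. `recognize` (speech-to-text) and `execute` (command
# dispatch) are hypothetical stand-ins, not any real assistant's API.
def assistant_loop(mic_frames, recognize, execute, wake_phrase="hey siri"):
    for frame in mic_frames:
        text = recognize(frame).lower()
        if text.startswith(wake_phrase):
            # No speaker verification and no confirmation step: audio from
            # a person, a podcast, or a TV commercial is treated exactly
            # like a command from the device's owner.
            execute(text[len(wake_phrase):].strip())
```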

[Figure: attack vector artifacts]

Admittedly, this sounds like a pretty far-fetched scenario.

But the infamous Nigerian 419 scam is based upon nothing more than gullibility and human greed – and has been around for years. Imagining that our personal assistants can send out malicious automated instructions to our friends – and have those communications believed and trusted – is actually a trivially small leap of faith to make, given the frailties of the human condition, and how we now predominantly communicate with one another.

One way this type of potential exploit could be thwarted is to train your device to react only to your voice – a sort of audible Touch ID, sketched below. Another – and, to me, obvious – way this class of attack could be stymied is to change the default activation passphrase of your device, so that a broadly staged attack would never get much traction.
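
Here’s a minimal sketch of that first idea – gating activation on a speaker-embedding match. The toy `embed` function stands in for a trained speaker-verification model, and the class name and threshold are hypothetical, purely illustrative:

```python
import numpy as np

def embed(audio, dim=64):
    # Stand-in speaker "embedding": pooled log-magnitude spectrum.
    # A real system would use a trained speaker-verification model.
    # Assumes a second or two of audio, so the spectrum has >> dim bins.
    spec = np.log1p(np.abs(np.fft.rfft(np.asarray(audio, dtype=float))))
    v = np.array([chunk.mean() for chunk in np.array_split(spec, dim)])
    return v / (np.linalg.norm(v) + 1e-9)

class SpeakerGate:
    """Only activate if the wake-phrase audio matches the enrolled owner."""
    def __init__(self, threshold=0.85):   # hypothetical operating point
        self.threshold = threshold
        self.profile = None

    def enroll(self, utterances):
        # Average embeddings over several recordings of the owner
        # saying the wake phrase, then re-normalize.
        p = np.mean([embed(u) for u in utterances], axis=0)
        self.profile = p / (np.linalg.norm(p) + 1e-9)

    def verify(self, audio):
        if self.profile is None:
            return False   # fail closed: no enrolled profile, no activation
        return float(self.profile @ embed(audio)) >= self.threshold

# Usage: gate.enroll([take1, take2, take3]); only act on a command when
# both the wake word is detected and gate.verify(frame) is True.
```

Note the design choice to fail closed, and note the limitation raised in the comments below: a recording of the owner’s own voice would still sail through a check like this.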

As it stands today, though, the existing automated attendant systems out there – XBox, Chrome, Cortana, Siri – operate largely in their default modes, with very few (if any) such protections in place, beyond the admittedly limited, nascent capabilities of today’s state of the art.

What are your thoughts? Is this simply being too paranoid, or will we begin seeing broadly targeted, socially engineered attacks in the wild – delivered by trusted, soothing female voices in our pockets?

Hey Siri… is that REALLY you?


2 thoughts on “Hey, Siri: Exploit Me”

  1. I’ve experienced this too. I never understood why a device can’t noise-cancel the audio it’s playing out of the mic signal to prevent accidental activation.
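
What this commenter is describing is acoustic echo cancellation (AEC): the device knows exactly what it’s playing, so it can subtract an adaptive estimate of that signal from what the microphone hears. A toy sketch using a normalized-LMS filter – the function name and parameter values are mine, purely illustrative:

```python
import numpy as np

def nlms_echo_cancel(mic, playback, taps=128, mu=0.5, eps=1e-6):
    # Subtract the device's own playback from the mic signal with an
    # NLMS adaptive filter; the residual is (mostly) external sound,
    # such as a real "Hey, Siri" from a person in the room.
    mic = np.asarray(mic, dtype=float)
    playback = np.asarray(playback, dtype=float)
    w = np.zeros(taps)        # adaptive estimate of the echo path
    buf = np.zeros(taps)      # most recent playback samples
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = playback[n]
        e = mic[n] - w @ buf                        # residual after echo removal
        w = w + (mu / (buf @ buf + eps)) * e * buf  # NLMS weight update
        out[n] = e
    return out
```

The catch: this only helps when the triggering audio comes out of the listening device itself. A podcast playing on a nearby TV or speaker gives the phone no reference signal to subtract – which is exactly the scenario in the post.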


  2. I don’t really think you are being paranoid about this at all, given the quantity of information we store in our phones, the continual growth in these technologies’ capacity to actually take action, and the simplicity of the attack.

    I think your suggestions accurately demonstrate that protecting against a broad, widespread attack is relatively low-hanging fruit, and these companies should really be thinking about social engineering vulnerabilities – especially when we consider, again, the information stored on our phones.

    However, I don’t see it as such an easy task to prevent a singular target from being attacked. If an attacker wanted some detail or info from a specific person, it would seem difficult to stop them even if the system recognized your voice or you changed the passphrase – neither of those can be very effectively protected. All someone would have to do is sit near you in a restaurant or other public setting with a recording device until you asked Siri something; then they have what they need to attack. The question really becomes: what level of access should something as difficult to protect as a spoken command give us? Do we unflinchingly trust that all text messages are from who they say they are from?

    Great post – had a lot of fun imagining a new chapter of a Kevin Mitnick book in which he uses Siri to get in.

