Is This Thing On?

The Unthing tests if smart speakers are inappropriately eavesdropping by playing AI-generated conversations to see if related ads appear.

Nick Bild
2 years ago · Security
The Unthing probes the security of voice assistants (📷: Roni Bandini)

Smart speakers sure are useful, but can we trust them? It takes a lot of confidence in a company to place its always-listening microphones in your home, knowing they could transmit your private conversations to a remote cloud computing system. For many people, that is a gamble they are not willing to take. But millions of others have no qualms about it and enjoy the convenience these devices provide.

These voice assistants are typically designed to start recording audio only after detecting that a wake word has been spoken. This safeguard is intended to protect the user's privacy. Over the years, however, some smart speaker owners have noticed odd occurrences. For example, after discussing a particular topic, they wind up being served suspicious advertisements that seem to have been triggered by that conversation. Of course, there are other explanations for these situations, such as the fact that these companies often target users with ads based on the preferences of other individuals with whom they share many similarities.

But ridding oneself of the nagging suspicion of being spied on is not easy. Engineer and hardware hacker Roni Bandini recently ran into a situation in which some ads seemed suspiciously targeted at the topic of a private conversation he had just had, so he wanted to find out whether it really was a coincidence or the paranoia was justified. To that end, Bandini created a little gadget called The Unthing to run some experiments.

The name of The Unthing is a reference to The Thing, a well-known covert listening device from the Cold War era. In this case, however, The Unthing seeks to sniff out inappropriate snooping by voice assistants. It does this by using OpenAI’s API to generate convincing conversations with ChatGPT that drop in mentions of specific products. These conversations are converted to audio via a text-to-speech pipeline, then played throughout the day next to a smart speaker that should not be listening in, since the wake word is never spoken. The idea is to see if these artificial conversations lead to ads for products that would otherwise be unexpected.
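The two steps described above, generating a product-seeding conversation and rendering it to speech, might look something like the following minimal sketch using OpenAI's official Python package. Note that the function names, prompt wording, model choices, and output filename here are illustrative assumptions, not Bandini's actual code.

```python
def build_prompt(product: str) -> str:
    """Compose a hypothetical prompt asking for a casual two-person
    chat that mentions the target product by name (assumed wording)."""
    return (
        "Write a short, natural-sounding conversation between two "
        f"friends that casually mentions the product '{product}' "
        "a couple of times. Plain dialogue only, no stage directions."
    )


def generate_conversation(product: str) -> str:
    # Requires an OPENAI_API_KEY in the environment; imported lazily so
    # the pure helpers above can be used without the package installed.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": build_prompt(product)}],
    )
    return response.choices[0].message.content


def synthesize_mp3(text: str, out_path: str = "conversation.mp3") -> str:
    # OpenAI's text-to-speech endpoint returns audio bytes; requesting
    # MP3 means the file can go straight onto the DFPlayer Pro.
    from openai import OpenAI

    client = OpenAI()
    speech = client.audio.speech.create(
        model="tts-1", voice="alloy", input=text, response_format="mp3"
    )
    with open(out_path, "wb") as f:
        f.write(speech.content)
    return out_path
```

In a real deployment one would likely loop over a list of target products, generating and caching one MP3 per product before the playback schedule begins.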

The Unthing is controlled by a Python script that contacts the OpenAI API to generate conversations and convert the text responses into MP3 audio files. These MP3s are then transferred to a DFPlayer Pro that is connected to a small speaker to play the conversations near a voice assistant. A set of 10 pushbuttons on a Fermion AdKey Board is also included to control the device.

This is a clever build, but Bandini has not yet said whether the fake conversations had any impact on the ads that were served to him. It will be interesting to hear updates in the future, though even then the results will not be conclusive, as there is always the possibility that the ad selection was based on some other criteria. For more certainty on the matter, you might consider combining The Unthing with Speaker Snitch.
