People who are deaf or hard of hearing can have difficulty maintaining awareness of their surroundings. This can lead to dangerous or inconvenient situations, because being able to hear and recognize the sounds in our environment is integral to modern living. Imagine not realizing that an alarm is going off, or that the microwave has begun to beep. That is why a team of researchers at the University of Washington created SoundWatch, a smartwatch app that listens for the user and notifies them about which sound it is currently hearing.
At first, the development team tried using a phone by itself to pick up sounds, process them, and display messages to the user. However, they opted for smartwatches instead because of their convenient size and greater portability in situations such as exercising at a gym.
Before embarking on this project, the team built a system called HomeSound, which used several Microsoft Surface tablets placed throughout the house to pick up noises and act as a network of interconnected displays. These displays showed the waveform of each sound and its source. The tablets could also store a history of all sounds picked up while the user was away, in case something was missed. The lessons gleaned from this foray were later incorporated into SoundWatch.
The most common uses of a smartwatch's microphone are making phone calls and talking to digital assistants, but it can be repurposed for other tasks too. To recognize sounds with a machine learning model, there must first be a large enough dataset on which to train it. So for three weeks, the researchers had a group of six participants go about their lives and complete surveys, building up a repository of 31 hours of sound data.
Smartwatches have relatively small batteries and cannot run very demanding tasks, so capturing noises and running them through the HomeSound classifier on the watch itself was out of the question. To get around this limitation, the watch sends sound data to the user's smartphone, where it is filtered and passed to the classifier. Once the phone's work is done, it sends the resulting label back to the watch for display, along with the sound's amplitude (loudness) and duration. The watch then emits a small haptic buzz to alert the user that a new sound has been detected.
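The offloading round trip described above can be sketched in a few lines of Python. This is a minimal illustration, not the app's actual code: `phone_classify` is a hypothetical stand-in that labels a chunk purely by loudness (the real app runs a trained machine learning model on the phone), and the loudness shown to the user is computed as an RMS level in decibels.

```python
import math

def rms_db(samples):
    """Loudness of an audio chunk as a root-mean-square level in dB (full scale = 1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-12))  # floor avoids log10(0) on silence

def phone_classify(samples):
    """Hypothetical stand-in for the phone-side classifier.

    The real system runs a trained model over the filtered audio; this toy
    version labels the chunk by loudness alone so the round trip is runnable.
    """
    db = rms_db(samples)
    label = "loud sound (e.g. alarm)" if db > -20 else "quiet sound (e.g. running water)"
    return label, db

def watch_display(label, db, duration_s):
    """Build the notification text the watch would show alongside its haptic buzz."""
    return f"{label}: {db:.0f} dB, {duration_s:.1f} s"

# Simulated round trip: the watch records a one-second chunk and offloads it.
sample_rate = 16000
chunk = [0.5 * math.sin(2 * math.pi * 440 * t / sample_rate) for t in range(sample_rate)]
label, db = phone_classify(chunk)                              # runs on the phone
message = watch_display(label, db, len(chunk) / sample_rate)   # shown on the watch
print(message)
```

In the real app the watch-to-phone hop would go over the Bluetooth data channel between the devices; the split matters because the phone has the battery and compute budget to run the model continuously, while the watch only captures audio and renders the result.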
This system greatly assisted the participants who used it: they could tell if a car alarm was going off in the distance or if they had inadvertently left a faucet running. However, this is not the end of the project, as the team is now working on another revision, HoloSound. As the name implies, this device uses the HoloLens augmented reality headset to display real-time captions for hard-of-hearing users. Another focus is picking out individual sounds, such as a siren, and pinpointing where they are coming from. These innovations are an excellent use of edge AI computing, and they can lead to devices that greatly help people with disabilities.