Don't Tap with Your Mouth Open

TeethTap incorporates motion and acoustic sensors to control electronic devices with teeth gestures.

Nick Bild
5 years ago · AI & Machine Learning
TeethTap device (📷: W. Sun et al.)

The most common way to interact with electronic devices is with the use of hands as an input source. There are times, however, when it is inconvenient to get our hands on our devices — answering a phone call by swiping on a smartwatch while carrying the groceries, for example. There are also situations where hand-based input is impossible, such as in the case of individuals with certain motor impairments.

A plethora of interface devices have been developed to address these needs in recent years. Joining the mix is a new gesture sensing gadget called TeethTap. It is a wearable, earpiece-based motion and acoustic sensor that can recognize up to 13 discrete teeth tapping gestures.

A 3D-printed earpiece outfitted with a pair of inertial measurement units (IMUs) and contact microphones is placed just behind the bottom of the ear, where the jawline begins. Through careful positioning of the sensors, the developers were able to capture gyroscopic data whenever the jaw shifts position, while the contact microphones pick up the acoustic signature of teeth gestures. An ESP32-based Adafruit Huzzah32 development board streams the sensor data over WiFi to a computer for processing.
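The article doesn't detail the wireless protocol, so here is a minimal sketch of the computer-side receiver, assuming the Huzzah32 streams fixed-size UDP packets, each holding three float32 gyroscope axes and one float32 microphone sample (a hypothetical layout, not the authors' actual format):

```python
import socket
import struct

# Hypothetical packet layout: 3 float32 gyro axes + 1 float32 mic sample
PACKET_FORMAT = "<4f"
PACKET_SIZE = struct.calcsize(PACKET_FORMAT)

def parse_packet(data: bytes):
    """Unpack one sensor packet into ((gx, gy, gz), mic_sample)."""
    gx, gy, gz, mic = struct.unpack(PACKET_FORMAT, data)
    return (gx, gy, gz), mic

def receive_stream(host="0.0.0.0", port=5005):
    """Yield parsed sensor readings streamed over WiFi from the earpiece."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, _addr = sock.recvfrom(PACKET_SIZE)
        yield parse_packet(data)
```

UDP keeps per-sample overhead low on the microcontroller side, at the cost of tolerating occasional dropped packets, which is usually acceptable for a continuous sensor stream.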

The custom processing pipeline receives data wirelessly from the earpiece. The first stage monitors the data stream for motion or acoustic energy that exceeds a predetermined threshold, indicating that a gesture may have occurred. Next, a secondary filter uses a support vector machine to reject unwanted noise (e.g., talking or chewing). The filtered data is then fed into a k-nearest neighbors classification algorithm to assign a predicted gesture to the observation.
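The three stages above can be sketched with scikit-learn. The threshold value, feature format, and label convention (0 = noise, 1 = gesture for the SVM) are assumptions for illustration, not the paper's actual parameters:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

ENERGY_THRESHOLD = 0.5  # assumed value; in practice tuned on real sensor data

def exceeds_threshold(window):
    """Stage 1: flag a candidate event when the window's mean energy spikes."""
    return np.mean(np.asarray(window) ** 2) > ENERGY_THRESHOLD

# Stage 2: SVM separates true gestures (label 1) from noise like talking
# or chewing (label 0); must be fit on labeled training windows first.
noise_filter = SVC()

# Stage 3: KNN assigns one of the 13 discrete gesture labels.
gesture_clf = KNeighborsClassifier(n_neighbors=5)

def classify(window, features):
    """Run one feature vector through the full three-stage pipeline."""
    if not exceeds_threshold(window):
        return None                          # no candidate event detected
    if noise_filter.predict([features])[0] == 0:
        return None                          # rejected as non-gesture noise
    return gesture_clf.predict([features])[0]
```

Cascading a cheap energy check before the two learned models means the classifiers only run on candidate events, keeping the always-on monitoring loop inexpensive.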

A small-scale validation study involving eleven participants was conducted to assess the performance of TeethTap. The participants were asked to perform each of the thirteen gestures multiple times in a random order, and the data collected from this exercise was used to train the machine learning models. Two follow-up sessions were held later, in which the participants performed the gestures against the previously trained models. Across the 1,382 teeth gestures tested, TeethTap reached an average accuracy of 90.2%.
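The reported figure amounts to pooling correct predictions across all follow-up sessions and dividing by the total gesture count. A small illustration with hypothetical labels (not the study's data):

```python
import numpy as np

def overall_accuracy(sessions):
    """Pooled accuracy over sessions, each a (y_true, y_pred) pair.

    Weighting by gesture count per session, rather than averaging
    per-session accuracies, matches a single pooled percentage.
    """
    correct = sum(int(np.sum(np.asarray(t) == np.asarray(p)))
                  for t, p in sessions)
    total = sum(len(t) for t, _ in sessions)
    return correct / total

# Hypothetical example: 4 correct out of 5 gestures -> 0.8
sessions = [([1, 2, 3], [1, 2, 0]), ([4, 4], [4, 4])]
```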

Recognizing that the current form factor may not be entirely practical for general use, the team envisions TeethTap being integrated into earphones or headphones in the future. That would go a long way toward making the device more usable in the real world. Until it can be untethered from a larger compute resource, however, even if the tether is only WiFi, it will remain of limited use as a general-purpose input device.

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.