Dancing with Myself

HulaMove makes your digital devices submit to your every command through the power of your dance moves.

Nick Bild
3 years ago • Machine Learning & AI
HulaMove controlling VR application (📷: X. Xu et al.)

It has often been said that necessity is the mother of invention. Case in point: HulaMove. People have long been frustrated with the user interfaces for their electronic devices, wondering why, for example, they have to swipe on their phone screen to answer a call. Would it not be much easier if we could do a little hula dance to not only take that call, but also brighten the day of any bystanders?

Alright, so the proverb does not always hold true, but hey, sometimes we do things because we can, not because we need to. In that vein, HulaMove was developed in a collaboration between researchers at the University of Washington, Seattle, and Tsinghua University.

The HulaMove technique does not require any specialized hardware; it makes use of the inertial measurement unit (IMU) already present in most smartphones. The phone can be at any location near waist level, such as in a pocket. IMU data is first calibrated by transforming the movement of the phone into the movement of the body; this makes HulaMove insensitive to the exact location of the phone on the body. Next, a gesture detection module analyzes the signals to determine whether an intentional gesture is likely to have occurred. If so, a final gesture recognition module predicts the intended gesture.
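The authors' code is not reproduced here, but a minimal sketch of that three-stage flow might look like the following. Everything in it, including the gravity-based calibration, the motion-energy threshold standing in for the detection CNN, and all function names, is an illustrative assumption rather than the paper's implementation:

import numpy as np

def calibrate_to_body_frame(accel, gyro):
    """Rotate IMU samples (N x 3 arrays) from the phone's frame into a
    body-centric frame, so results don't depend on where the phone sits.
    This sketch aligns the mean gravity direction with the vertical axis
    (and assumes the phone isn't exactly upside down, i.e. c != -1)."""
    g = accel.mean(axis=0)
    g = g / np.linalg.norm(g)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(g, z)                       # rotation axis (unnormalized)
    c = float(np.dot(g, z))                  # cosine of the rotation angle
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    R = np.eye(3) + vx + vx @ vx / (1.0 + c)  # Rodrigues' rotation formula
    return accel @ R.T, gyro @ R.T

def looks_like_gesture(accel, gyro, threshold=1.5):
    """Stand-in for the paper's binary detection CNN: treat a window with
    enough motion energy as a candidate gesture. The threshold is made up."""
    motion = np.concatenate([accel - accel.mean(axis=0), gyro], axis=1)
    return float(np.abs(motion).mean()) > threshold

def process_window(accel, gyro, recognize):
    """Per-window flow: calibrate, gate with the detector, then hand any
    candidate window to the gesture recognizer (sketched further below)."""
    accel_b, gyro_b = calibrate_to_body_frame(accel, gyro)
    if looks_like_gesture(accel_b, gyro_b):
        return recognize(accel_b, gyro_b)
    return None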

The gesture detection module used a five-layer convolutional neural network (CNN), trained on 1,600 gesture examples, to make the binary call on whether a gesture occurred. Recognizing the eight chosen gestures then fell to a hierarchical tree-CNN, which divided the task into four easier binary or ternary classifications. The hierarchical tree consisted of a type classifier (to distinguish rotating from shifting motion), a rotation classifier (to determine which rotation gesture occurred), a shift (left/right) classifier (to recognize the directionality of shifting gestures), and a shift (forward/backward) classifier (to distinguish forward from backward motions). The tree-based approach was found both to increase accuracy and to reduce the amount of training data required, compared with a single eight-class classifier.
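The article does not spell out exactly how the four classifiers chain together, so the sketch below assumes one plausible wiring: the type classifier routes a window either to the rotation classifier or to the shift branch, where the left/right classifier is treated as ternary and may defer to the forward/backward classifier. The gesture labels and the classifier interface are hypothetical, not the paper's actual gesture set:

from typing import Callable
import numpy as np

# Illustrative labels only; they do not reproduce the paper's exact
# eight-gesture vocabulary.
ROTATION_GESTURES = ["rotate_left", "rotate_right", "rotate_around"]
SHIFT_LR = ["shift_left", "shift_right"]
SHIFT_FB = ["shift_forward", "shift_backward"]

# Each classifier (a small CNN in the paper) is assumed here to be a
# callable mapping a calibrated IMU window to a class index.
Classifier = Callable[[np.ndarray], int]

def tree_classify(window: np.ndarray,
                  type_clf: Classifier,      # rotating vs. shifting (binary)
                  rotation_clf: Classifier,  # which rotation gesture
                  shift_lr_clf: Classifier,  # left / right / neither (ternary here)
                  shift_fb_clf: Classifier) -> str:
    """Route one window through the hierarchical tree, turning one hard
    eight-way problem into a few easy two- or three-way decisions."""
    if type_clf(window) == 0:                    # rotation branch
        return ROTATION_GESTURES[rotation_clf(window)]
    lr = shift_lr_clf(window)                    # shift branch
    if lr < len(SHIFT_LR):                       # clear left or right shift
        return SHIFT_LR[lr]
    return SHIFT_FB[shift_fb_clf(window)]        # defer to forward/backward

Splitting the decision this way means each classifier only has to separate a handful of visually distinct motion patterns, which is plausibly why the authors saw better accuracy from less training data than with a flat eight-class model.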

Working with a data set of 14,000 gesture samples and 120,000 noise samples, the researchers found that HulaMove achieved an overall gesture recognition accuracy of 97.5%.

Jokes about funny, and perhaps socially inappropriate, phone-answering gestures aside, there are some potentially interesting applications for the technique. First and foremost is virtual reality, where HulaMove can create a more immersive experience involving more of the body, for example in virtual object avoidance.

As for everyday use, the team has not yet evaluated how robust the system is to noise during long stretches without gestures. They intend to investigate this in the future, but it remains to be seen whether people will be scrambling to work HulaMove into their normal daily routines.
