A Handy User Interface

A wearable that continuously tracks hand pose, even in the presence of obstructions.

Nick Bild

Hand gestures are a very natural way for users to interact with computers, so detecting hand gestures has long been a popular topic of research in human-computer interaction. Many current solutions require a camera positioned in the environment to capture images of the subject’s hands. Such solutions necessitate pre-instrumenting the environment and are therefore of little use with mobile devices. Additionally, these types of interfaces fail whenever there is an obstruction between the camera and the subject’s hands.

Several wearable solutions exist that can make gesture detection more portable. They use acoustics, cameras, or pressure sensors to detect a set of predefined gestures. Unfortunately, these solutions lack flexibility, as they can only recognize a limited set of hand positions. Current wearable devices that can continuously track fingers in any position either require cumbersome gloves to be worn, or cameras positioned at angles that make them impractical for daily wear.

A research team at Cornell University has developed FingerTrak, a minimally invasive, gloveless wearable that continuously tracks finger positions in 3D space.

FingerTrak uses four low-resolution (32×24) thermal cameras, attached to a wristband, to capture silhouette images of the wearer’s hand. Each camera connects to a Raspberry Pi 3B+ over I2C, and the Pi transmits the image data to a ThinkPad X1 laptop.
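The article doesn’t name the camera module, and the exact processing steps on the Pi aren’t described, but the basic step of turning a raw 24×32 thermal frame into a binary hand silhouette can be sketched independently. Below is a minimal sketch using a simulated frame; the temperature threshold and the simulated values are illustrative assumptions, not details from the paper.

```python
import numpy as np

def thermal_to_silhouette(frame, threshold_c=30.0):
    """Threshold a 24x32 thermal frame (degrees C) into a binary
    hand silhouette: skin pixels are warmer than the background."""
    return (frame >= threshold_c).astype(np.uint8)

# Simulated frame: room-temperature background with a warm "hand" blob.
frame = np.full((24, 32), 22.0)   # background at 22 C
frame[8:16, 10:22] = 33.0         # hand-temperature region
silhouette = thermal_to_silhouette(frame)
print(silhouette.sum())           # count of silhouette pixels
```

In practice the threshold would need to adapt to ambient temperature, but thresholding is what makes the low 32×24 resolution workable: the network only needs the hand’s outline, not its appearance.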

A multi-view convolutional neural network running on the laptop extracts features from the four silhouette images. These features are then fed into a regression network that predicts the hand pose.
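The article doesn’t specify the network architecture, but a common multi-view pattern is a shared convolutional encoder applied to each of the four silhouette views, with the concatenated features passed to a regression head that outputs 20 joints × 3 coordinates. The sketch below follows that pattern in PyTorch; every layer size is an illustrative assumption.

```python
import torch
import torch.nn as nn

class MultiViewPoseNet(nn.Module):
    """Sketch: a shared CNN encoder over four 24x32 silhouette views,
    with concatenated features regressed to 20 joints x 3 coordinates."""
    def __init__(self, n_joints=20):
        super().__init__()
        self.n_joints = n_joints
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 24x32 -> 12x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 12x16 -> 6x8
            nn.Flatten(),
            nn.Linear(32 * 6 * 8, 128), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Linear(4 * 128, 256), nn.ReLU(),
            nn.Linear(256, n_joints * 3),
        )

    def forward(self, views):                     # views: (B, 4, 1, 24, 32)
        feats = [self.encoder(views[:, i]) for i in range(4)]
        out = self.regressor(torch.cat(feats, dim=1))
        return out.view(-1, self.n_joints, 3)     # (B, 20, 3) positions

views = torch.rand(2, 4, 1, 24, 32)               # batch of 2 samples
pose = MultiViewPoseNet()(views)
print(pose.shape)
```

Sharing one encoder across all four views keeps the parameter count low, which matters when the inputs are tiny binary silhouettes rather than full images.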

Using silhouette images, rather than full images of the hand, allows the system to accurately estimate the positions of twenty finger joints in 3D space even when fingers are not fully visible. The system achieved a mean absolute error (MAE) of 1.2 cm for joint positions and an MAE of 6.46° for joint angles. The authors claim that their device is the first wearable technology able to reconstruct the position of an entire hand while the hand is holding an object.
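For a concrete sense of the reported position metric: the article doesn’t say whether the 1.2 cm MAE is averaged per coordinate axis or per joint (as a Euclidean distance), so the sketch below computes both on made-up predicted and ground-truth joint positions.

```python
import numpy as np

# Hypothetical predicted vs. ground-truth positions for 20 joints, in cm.
rng = np.random.default_rng(0)
truth = rng.uniform(0.0, 10.0, size=(20, 3))
pred = truth + rng.normal(0.0, 1.0, size=(20, 3))   # noisy predictions

# Per-axis mean absolute error, averaged over all joints and coordinates.
mae_axis = np.abs(pred - truth).mean()

# Mean Euclidean distance per joint, another common "position MAE".
mae_joint = np.linalg.norm(pred - truth, axis=1).mean()
print(mae_axis, mae_joint)
```

The Euclidean variant is always at least as large as the per-axis variant, so the two definitions are not interchangeable when comparing systems.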

FingerTrak is a very interesting idea, but the current implementation needs some work before it is ready for real-world use. Perhaps a future revision will swap out the relatively bulky and power-hungry Raspberry Pis for WiFi-capable microcontrollers, making for a more practical device.

Check out the following video for a real-time demo of FingerTrak’s capabilities.
