A Single Wrist-Worn E-Skin Sensor Can Track Individual Finger Movements, Researchers Reveal

Despite using only a single sensor, the system developed at Seoul National University and KAIST can detect complex finger movements.

Gareth Halfacree
4 years ago / Sensors / Wearables
A single strain sensor, hooked up to a deep learning system, can track finger movements. (📷: Kim et al)

A team of scientists from Seoul National University and the Korea Advanced Institute of Science and Technology (KAIST) has unveiled a simple electronic skin-like sensor that can read finger movements — even when worn only at the wrist.

The idea of using electronic skin as a sensor isn't new: Previous projects have seen so-called "e-skin" used for everything from magnetic sensing to health monitoring and haptic feedback. What makes this work different is that it can measure the movement of the wearer's individual fingers — despite using just one sensor worn at the wrist.

"Conventional e-skins needs at least five to 10 strain sensors to accurately predict hand motions, with the required number of strain sensors increasing as the complexity of a target system increases," Professor Seung Hwan Ko explains in an interview with TechXplore. "The deep learned electronic skin sensor we developed, on the other hand, can achieve this job with only a single sensor."

The wrist-worn strain sensor itself is, arguably, the least interesting part of the system: it had previously been thought that the data from a single wrist-worn sensor would be enough to tell that a finger was moving, but not which finger. With just eight seconds of training per finger, though, a deep learning system proved capable of figuring out which finger was being used at any given time.
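For readers curious how a single strain channel can drive a per-finger prediction, the sketch below shows one plausible setup in Keras: an LSTM reads a window of single-channel strain readings and a small head classifies which finger is moving. The window length, layer sizes, and random stand-in data are assumptions for illustration, not the authors' published architecture.

```python
# Minimal sketch (not the authors' exact model): an LSTM classifier that maps
# a window of single-channel strain readings to one of five fingers.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 200    # hypothetical number of samples per window
N_FINGERS = 5

model = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),          # one strain sensor -> one channel
    layers.LSTM(64),                          # temporal pattern of the strain signal
    layers.Dense(32, activation="relu"),
    layers.Dense(N_FINGERS, activation="softmax"),  # which finger is moving
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy stand-in for the short labelled recording (~8 seconds per finger):
x_train = np.random.randn(100, WINDOW, 1).astype("float32")
y_train = np.random.randint(0, N_FINGERS, size=(100,))
model.fit(x_train, y_train, epochs=5, batch_size=16, verbose=0)
```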

"The RSL [Rapid Situation Learning] system uses transfer learning techniques to utilize knowledge on sensor behaviors obtained during previous training steps," the team shares. "The parameters for the LSTM and dense layers are then transferred from the pre-trained model to the new model. After retraining for around 5 min with the newly collected data, the model is then ready to generate the hand motions of the new user.

"Through [the] RSL system, all steps required for generating the hand motions of a new user are processed automatically. Typically, the temporal behaviour patterns of the sensor signals that were already previously analysed by our pre-trained model is transmitted to the new network. Consequently, the retraining time is massively reduced because the network only needs to retrain its mapping functions to map input values to a different range of sensor values."

The researchers' work has been published in the journal Nature Communications under open-access terms.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.