TinyRadarNN Links Two Neural Nets on a RISC-V CPU for Low-Power RADAR-Based Gesture Recognition

The GAP8 RISC-V processor powers a RADAR-based gesture recognition system that fits in just 92kB of memory and draws only 21mW of power.

A team of computing researchers from ETH Zurich and the University of Bologna have released a paper describing an ultra-low-power gesture recognition system based on short-range RADAR sensors and neural networks, running on the free and open source Parallel Ultra-Low Power (PULP) Platform processor design.

"This work proposes a low-power high-accuracy embedded hand-gesture recognition algorithm targeting battery-operated wearable devices using low power short-range RADAR sensors," the team explains in the paper's abstract, describing the TinyRadarNN system it developed. "A 2D Convolutional Neural Network (CNN) using range frequency Doppler features is combined with a Temporal Convolutional Neural Network (TCN) for time sequence prediction. The final algorithm has a model size of only 46 thousand parameters, yielding a memory footprint of only 92kB."
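The two quoted figures are consistent with each other: 46 thousand parameters occupying 92kB works out to two bytes per parameter, which suggests 16-bit weights. A quick back-of-the-envelope check (the quantization width is an inference from the numbers, not something stated in the quote):

```python
# Sanity-check the quoted figures: 46k parameters in a 92kB footprint
# implies two bytes per parameter, i.e. 16-bit weights (inferred, not
# stated in the abstract). Uses 1kB = 1000 bytes.
PARAMS = 46_000        # model size quoted in the abstract
FOOTPRINT_KB = 92      # memory footprint quoted in the abstract

bytes_per_param = FOOTPRINT_KB * 1000 / PARAMS
print(bytes_per_param)  # 2.0, consistent with 16-bit (2-byte) weights
```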

The extremely low memory footprint isn't the only notable thing about the team's approach: implemented on the RISC-V-based GAP8 processor, an implementation of the PULP many-core design, the system drew just 21mW of power in live prediction mode, making it well suited to battery-powered devices.

"Two datasets containing 11 challenging hand gestures performed by 26 different people have been recorded containing a total of 20,210 gesture instances. On the 11 hand gesture dataset, accuracies of 86.6% (26 users) and 92.4% (single user) have been achieved, which are comparable to the state-of-the-art, which achieves 87% (10 users) and 94% (single user), while using a TCN-based network that is 7500x smaller than the state-of-the-art," the team notes, highlighting the dramatic reduction in network size.

"Furthermore, the gesture recognition classifier has been implemented on Parallel Ultra-Low Power Processor, demonstrating that real-time prediction is feasible with only 21 mW of power consumption for the full TCN sequence prediction network."
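Taken at face value, the quoted 7500x reduction implies a state-of-the-art baseline of roughly 345 million parameters. A minimal sketch of that arithmetic (the baseline's absolute size is inferred from the ratio, not stated in the quote):

```python
# Infer the implied baseline model size from the quoted ratio.
# The abstract gives TinyRadarNN's size (46k parameters) and claims
# it is 7500x smaller than the state-of-the-art network.
TINY_PARAMS = 46_000
RATIO = 7_500

implied_sota_params = TINY_PARAMS * RATIO
print(f"{implied_sota_params:,}")  # 345,000,000, roughly 345M parameters
```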

The work is available on arXiv.org now under open access terms.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.