Never Stop Learning

TinyML can add smarts to microcontrollers, but online learning is needed to keep them smart.

Nick Bild
Experimental setup (📷: H. Ren et al.)

If it seems like you are hearing about tinyML every time you turn around these days, that is probably because, well, you are hearing about tinyML every time you turn around. And rightly so — the democratization of machine learning and artificial intelligence is a big deal. In addition to making the technologies more accessible, tinyML also has implications for protecting privacy, reducing application latency, and improving energy efficiency.

Most of the advancements in tinyML tend to focus on shrinking model complexity while maintaining accuracy, or optimizing models for particular hardware platforms to reduce inference latency. But that is only part of the story. Models need to be periodically retrained as well, and that typically means heavy computation on decidedly non-tiny hardware.

This means that tinyML models are static: they may have trouble adapting to new data and cannot be adjusted to cope with new scenarios. Fed up with this state of affairs, a group at Siemens AG has developed TinyOL (TinyML with Online-Learning) to address the need for online learning on microcontrollers.

TinyOL can be attached to an arbitrary, existing neural network as an additional layer. It continually learns from new data as it streams in, keeping the model up to date at all times. The method can also modify the model's structure to accommodate new output classes.

On a microcontroller, a deployed neural network is typically stored as a read-only array in flash memory. The added TinyOL layer lives in RAM instead, so its weights can be updated as new data comes in. And because the TinyOL add-on serves as the network's output layer, the set of output classes can be modified as well.
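To make that flash/RAM split concrete, here is a minimal C++ sketch of what such a trainable output layer could look like. The layer sizes, the squared-error update, and all the names are illustrative assumptions rather than the paper's actual code; the point is simply that the frozen network is `const` data living in flash, while the TinyOL weights are ordinary mutable arrays in RAM that can be nudged by gradient descent one sample at a time.

```cpp
// Illustrative sketch of a TinyOL-style trainable output layer (not the
// authors' implementation). On Cortex-M parts, `const` globals are placed
// in flash (read-only), while ordinary globals live in RAM.

constexpr int kFeatures   = 16;  // width of the frozen network's last layer (assumed)
constexpr int kMaxClasses = 8;   // rows reserved so classes can be added at runtime

// Frozen base model: const data ends up in flash and cannot be changed.
extern const float kFrozenWeights[];  // produced offline, flashed with the firmware

struct TinyOLLayer {
  float weights[kMaxClasses][kFeatures];  // RAM: mutable, trained on-device
  float bias[kMaxClasses];
  int   numClasses;

  // Forward pass: a plain dense layer over the frozen network's features.
  void forward(const float *features, float *out) const {
    for (int c = 0; c < numClasses; ++c) {
      float acc = bias[c];
      for (int f = 0; f < kFeatures; ++f) acc += weights[c][f] * features[f];
      out[c] = acc;
    }
  }

  // One online SGD step on a single labeled sample (squared-error loss on
  // the raw outputs, chosen here for brevity; the paper's loss may differ).
  void train(const float *features, int label, float lr) {
    float out[kMaxClasses];
    forward(features, out);
    for (int c = 0; c < numClasses; ++c) {
      float target = (c == label) ? 1.0f : 0.0f;
      float err = out[c] - target;
      bias[c] -= lr * err;
      for (int f = 0; f < kFeatures; ++f)
        weights[c][f] -= lr * err * features[f];
    }
  }

  // Adding a class is just claiming another reserved row in RAM, which is
  // what makes it possible to grow the output layer after deployment.
  int addClass() { return (numClasses < kMaxClasses) ? numClasses++ : -1; }
};
```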

The approach was tested on an Arduino Nano 33 BLE Sense development board (Arm Cortex-M4 CPU clocked at 64 MHz, 256KB of SRAM), with TinyOL attached to an existing autoencoder neural network. A USB fan was included in the experiment to determine whether the model could recognize the various vibration patterns the fan produced, using input from the board's onboard accelerometer.
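Building on the sketch above, a hypothetical Arduino loop for this setup might look like the following. Only the Arduino_LSM9DS1 IMU calls are the board's real onboard-accelerometer API; `runFrozenNetwork()` and `currentLabel()` are stand-ins for the deployed autoencoder and for however ground-truth labels were supplied during the experiment.

```cpp
#include <Arduino_LSM9DS1.h>  // onboard IMU library for the Nano 33 BLE Sense

TinyOLLayer tinyol;           // the RAM-resident layer from the sketch above
float features[kFeatures];

// Placeholder: stand-in for the flash-resident network's feature extractor.
void runFrozenNetwork(float x, float y, float z, float *out) {
  out[0] = x; out[1] = y; out[2] = z;  // trivial stub features
  for (int i = 3; i < kFeatures; ++i) out[i] = 0.0f;
}

// Placeholder: in the experiment, the label would come from the known fan state.
int currentLabel() { return 0; }

void setup() {
  if (!IMU.begin()) {
    while (true) {}           // halt if the accelerometer fails to start
  }
  tinyol.numClasses = 2;      // e.g. start with "fan off" vs. "fan on"
}

void loop() {
  float x, y, z;
  if (!IMU.accelerationAvailable()) return;
  IMU.readAcceleration(x, y, z);

  runFrozenNetwork(x, y, z, features);            // features from the frozen model
  tinyol.train(features, currentLabel(), 0.01f);  // one SGD step per incoming sample

  float scores[kMaxClasses];
  tinyol.forward(features, scores);  // always-current prediction, no retraining round-trip
}
```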

The results showed that TinyOL's online learning mechanism made the model's vibration classifications more accurate over time. They also demonstrated the model's ability to adapt to new data, as it correctly classified vibration patterns that were not included in the initial training data.

TinyOL may be the next step in tinyML, but it is not the final step; there is still much work to be done. The research group is currently designing algorithms that will allow full-size models to be trained on-device. They also plan to support more optimizers and neural network operations, and to explore training with reduced numerical precision.

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.