Machine Learning on an ATtiny85?

Doing tiny machine learning on even tinier processors.

The Microchip ATtiny85. (📷: Microchip)

Announced to the public about a year ago, TensorFlow Lite for Microcontrollers is a massively streamlined version of TensorFlow. Designed to be portable to “bare metal” systems, it needs neither the standard C libraries nor dynamic memory allocation. The core runtime fits in just 16KB on a Cortex-M3 and, with enough operators to run a speech keyword detection model, takes up a total of 22KB. The official port to the Arduino environment arrived just over three months ago.

But what if you want to get really tiny? The ATtiny85, for instance, which has just 8KB of program memory. Surely you can’t run machine learning algorithms on something that lightweight. Or can you?

Well, it turns out it is possible, and that’s exactly what one researcher who goes by the name ‘Eloquent Arduino’ has managed. You just have to decide what is, and isn’t, machine learning. When a lot of people talk about machine learning they use the phrase almost interchangeably with neural networks, but there is a lot more to it than just that.

The MicroML generator was created as an alternative to TensorFlow. Instead of neural networks, MicroML supports Support Vector Machines (SVMs). Good at classifying high-dimensional features, SVMs are easy to optimise for RAM-constrained environments. While TensorFlow Lite for Microcontrollers squeezes its runtime into 16KB, MicroML lets you deploy models into just 2KB of memory.
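Part of the reason an SVM fits in so little memory is that inference is just a weighted sum of kernel evaluations against the stored support vectors. The pure-Python sketch below shows the shape of that computation for an RBF-kernel classifier; the support vectors, coefficients, and kernel parameter here are made-up illustrative values, not from any real trained model.

```python
import math

# Hypothetical support vectors, dual coefficients, and intercept for a
# two-class RBF-kernel SVM -- in a real deployment these come from training.
SUPPORT_VECTORS = [[0.0, 0.0], [1.0, 1.0], [1.0, 0.0]]
DUAL_COEFS      = [-0.8, 0.5, 0.3]   # alpha_i * y_i for each support vector
INTERCEPT       = 0.1
GAMMA           = 0.5                # RBF kernel parameter (made up)

def rbf(a, b):
    """RBF kernel: exp(-gamma * ||a - b||^2)."""
    return math.exp(-GAMMA * sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(x):
    """The sign of the decision function picks the class."""
    score = INTERCEPT + sum(
        c * rbf(sv, x) for c, sv in zip(DUAL_COEFS, SUPPORT_VECTORS)
    )
    return 1 if score > 0 else 0
```

Everything the classifier needs at run time is those few constant arrays, which is why the model's flash footprint scales with the number of support vectors rather than with a whole inference engine.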

So something like the gesture classifier example can be rebuilt using an SVM classifier to run on a board like the original (classic) Arduino Nano.

The problem is that not all models will fit inside the memory limitations of something like the original Arduino Nano. Built around the ATmega328, it has just 32KB of flash memory and 2KB of RAM.

“The core of SVM are support vectors: each trained classifier will be characterised by a certain number of them. The problem is: if there’re too much, the generated code will be too large to fit in your flash. For this reason, instead of selecting the best model on accuracy, you should make a ranking, from the best performing to the worst. For each model, starting from the top, you should import it in your Arduino project and try to compile: if it fits, fine, you’re done. Otherwise you should pick the next and try again. It may seem a tedious process, but keep in mind that we’re trying to infer a class from 90 features in 2 Kb of RAM and 32 Kb of flash…”

MicroML takes trained models generated with the scikit-learn Python library and converts them to C code. It’s this code that you then import into your Arduino project.
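To make the model-to-C step concrete, here is a toy code generator for a *linear* SVM. This is not MicroML’s actual output, just a minimal illustration of the idea: bake the trained parameters into a self-contained C `predict()` function with no runtime dependencies. The weights and bias are made up.

```python
# Made-up parameters of a "trained" linear SVM over 3 features.
WEIGHTS = [0.42, -1.3, 0.07]
BIAS    = 0.5

def export_linear_svm(weights, bias):
    """Emit a dependency-free C predict() function from SVM parameters."""
    terms = " + ".join(f"({w}f * x[{i}])" for i, w in enumerate(weights))
    return (
        "int predict(const float *x) {\n"
        f"    float score = {bias}f + {terms};\n"
        "    return score > 0 ? 1 : 0;\n"
        "}\n"
    )

c_source = export_linear_svm(WEIGHTS, BIAS)
print(c_source)
```

Because the parameters are compiled in as constants, the generated function needs no malloc, no interpreter, and no library: exactly the properties that let the result run on an 8-bit part.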

However, because MicroML exports plain C code, it can run on any embedded system, not just inside the Arduino environment. That means you can take something like the colour identification example and port it, not just to the Arduino Nano, but to something like the ATtiny85.

The ATtiny85 is an 8-bit AVR RISC-based micro-controller with 8KB of flash memory, and 512 bytes of RAM, and the trained model fits comfortably on board. In fact, it turns out that the SVM model for the colour example requires just 21 bytes of RAM. That means that this model at least can be run on even the ATtiny45, which has only 4KB of flash memory, and a meagre 256 bytes of RAM.

This is a classic example of what I call “capable computing,” computing that is ‘good enough’ to work. What it means here is that a machine learning researcher can successfully identify bananas🍌 on a chip where you couldn’t possibly get even a lightweight inference engine like TensorFlow Lite for Microcontrollers to run.

While I’m really excited about TensorFlow Lite for Microcontrollers, and the new accelerator hardware that operates at even higher power, squeezing machine learning onto hardware like the ATtiny85 and ATtiny45 isn’t just a party trick. SVM is just as much machine learning, and just as useful in the real world, as the heavier neural network models used by TensorFlow. In fact, depending on your use case, it can be more reliable and accurate.

So before you attack a machine learning problem with something like the Coral Dev Board from Google, or any other high-powered accelerator hardware, think about how appropriate it is, and whether you can build and run a model with a dollar’s worth of hardware rather than a hundred and fifty dollars’ worth of cutting-edge accelerator board.

Alasdair Allan
Scientist, author, hacker, maker, and journalist. Building, breaking, and writing. For hire. You can reach me at 📫