Stochastic Ternary Quantization Delivers TinyML Models More Robust to Adversarial Attacks

Quantization scales big models down to fit on microcontrollers; STQ, researchers say, also makes them less vulnerable to attack.

Researchers from the University of Glasgow and the University of London, working with STMicroelectronics, have come up with an approach that they say can help protect on-device machine learning models running on microcontrollers — so-called "tinyML" — from adversarial attacks.

"Reducing the memory footprint of Machine Learning (ML) models, especially Deep Neural Networks (DNNs), is imperative to facilitate their deployment on resource-constrained edge devices. However, a notable drawback of DNN [Deep Neural Network] models lies in their susceptibility to adversarial attacks, wherein minor input perturbations can deceive them," the researchers explain. "A primary challenge revolves around the development of accurate, resilient, and compact DNN models suitable for deployment on resource-constrained edge devices."

The team's approach to protecting models against such attacks (categorized as black-box and white-box attacks, in which the attacker has no knowledge or full knowledge of the model's characteristics, respectively) revolves around quantization-aware training using the QKeras framework. This training, which builds on QKeras' existing support for adversarial defense via Jacobian Regularization (JR), delivered a prototype model that, the researchers say, is considerably better positioned to resist attack.
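Jacobian Regularization works by penalizing the norm of the model's input-to-output Jacobian during training, so that small input perturbations can only produce correspondingly small output changes. As a rough illustration of the quantity being penalized (not the QKeras implementation, which relies on automatic differentiation; the function name here is hypothetical), it can be estimated by finite differences:

```python
import numpy as np

def jacobian_frobenius_penalty(f, x, eps=1e-4):
    """Estimate the squared Frobenius norm of df/dx at point x.

    Uses central finite differences over each input dimension.
    Illustrative only: JR training frameworks compute this term with
    automatic differentiation, often via random projections.
    """
    x = np.asarray(x, dtype=float)
    cols = []
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        cols.append((np.asarray(f(x + d)) - np.asarray(f(x - d))) / (2 * eps))
    J = np.stack(cols, axis=-1)  # Jacobian, shape (out_dim, in_dim)
    return float(np.sum(J ** 2))
```

For a linear map f(x) = Wx the Jacobian is W itself, so the penalty equals the squared Frobenius norm of W; during training this term is added, weighted by a hyperparameter, to the ordinary task loss.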

The model, created using a co-optimization strategy based on Stochastic Ternary Quantization (STQ), is claimed to retain a small footprint suitable for use with resource-constrained devices — targeting, in the team's experiments, the STMicro STM32H735GDK — and showed improved performance over rival designs when tested against the CIFAR-10 and Street View House Numbers (SVHN) image and Google Speech Commands audio datasets. More importantly, it also showed improved robustness against both black- and white-box attacks — though, the team admits, the approach has not yet been tested against the latest known attacks nor on state-of-the-art models.
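Stochastic Ternary Quantization constrains each weight to one of three values, -1, 0, or +1, keeping a weight's sign with a probability that grows with its magnitude and zeroing it otherwise; the injected randomness acts as a regularizer during training. A minimal NumPy sketch of the idea (a simplified illustration under assumed magnitude-proportional probabilities, not the paper's exact scheme) might look like:

```python
import numpy as np

def stochastic_ternary(w, rng):
    """Stochastically quantize weights to the ternary set {-1, 0, +1}.

    Each weight keeps its sign with probability |w| / max|w| and is
    zeroed otherwise, so large weights are almost always kept while
    small weights are usually pruned. Simplified illustration only;
    practical schemes also learn a per-layer scale factor.
    """
    w = np.asarray(w, dtype=float)
    scale = np.max(np.abs(w)) or 1.0   # avoid division by zero
    p = np.abs(w) / scale              # probability of keeping the sign
    keep = rng.random(w.shape) < p     # stochastic mask
    return np.sign(w) * keep
```

Because every surviving weight is exactly -1 or +1, the quantized layer needs only two bits per weight and no multiplications at inference time, which is what makes the approach attractive for microcontroller-class hardware.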

A preprint detailing the team's work is available on Cornell's arXiv server.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.