Offline Model Guard (OMG) Aims to Protect End Users, Vendors of Edge Device Machine Learning Systems

Tying in to a secure enclave on Arm chips, OMG looks to keep users' and vendors' data private from each other with minimal overhead.

A team from the Technische Hochschule Nürnberg and the Technische Universität Darmstadt has released a privacy-focused tool for machine learning at the edge, designed to protect both companies and end users from each other: Offline Model Guard (OMG).

"Performing machine learning tasks in mobile applications yields a challenging conflict of interest: Highly sensitive client information (e.g., speech data) should remain private while also the intellectual property of service providers (e.g., model parameters) must be protected," the researchers explain of the core problem they are attempting to solve. "Cryptographic techniques offer secure solutions for this, but have an unacceptable overhead and moreover require frequent network interaction."

"In this work, we design a practically efficient hardware-based solution. Specifically, we build Offline Model Guard (OMG) to enable privacy-preserving machine learning on the predominant mobile computing platform Arm — even in offline scenarios. By leveraging a trusted execution environment for strict hardware-enforced isolation from other system components, OMG guarantees privacy of client data, secrecy of provided models, and integrity of processing algorithms."

The team's proof-of-concept prototype is built on top of an Arm HiKey 960 development board, running TensorFlow Lite — the lightweight implementation of the TensorFlow machine learning framework designed for mobile and embedded devices. On the device, OMG runs through three phases: In the preparation phase, the enclave is loaded and cryptographically set up; in the initialisation phase, the system decides whether or not the locally-stored model should be decrypted; and in the operation phase, the machine learning task takes place.
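The three-phase flow can be illustrated with a minimal Python sketch. Note that all of the function names and structures below are illustrative assumptions, not the actual OMG API: a real deployment would run inside an Arm TrustZone enclave and invoke TensorFlow Lite, whereas this stand-in simply gates a (toy-encrypted) model behind an integrity check.

```python
import hashlib

# Illustrative sketch of OMG's three phases (hypothetical names and
# logic — not the project's real interface).

def prepare_enclave(vendor_key: bytes) -> dict:
    """Preparation: load the enclave and set up key material."""
    # A real enclave would be measured and provisioned with keys here;
    # we simply derive a session key from the vendor secret.
    return {"session_key": hashlib.sha256(vendor_key).digest()}

def initialise(enclave: dict, encrypted_model: bytes,
               expected_digest: bytes) -> dict:
    """Initialisation: decide whether the stored model may be decrypted."""
    key = enclave["session_key"]
    # Toy "decryption": XOR against a keystream derived from the key.
    keystream = (key * (len(encrypted_model) // len(key) + 1))
    model = bytes(a ^ b for a, b in zip(encrypted_model, keystream))
    # Release the model only if its integrity check passes.
    if hashlib.sha256(model).digest() != expected_digest:
        raise ValueError("model integrity check failed; refusing to decrypt")
    enclave["model"] = model
    return enclave

def operate(enclave: dict, client_input: bytes) -> bytes:
    """Operation: run the ML task on private input inside the enclave."""
    # A real system would run TensorFlow Lite inference here; we return
    # a digest to show model and input never leave the enclave in clear.
    return hashlib.sha256(enclave["model"] + client_input).digest()
```

The key design point the sketch captures is that the model plaintext and the client input only ever coexist inside the (here simulated) enclave: the vendor never sees the input, and the user never sees the decrypted model.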

There is an overhead, but the researchers claim it's minimal: In testing on keyword recognition, a plain TensorFlow Lite implementation of a system solving a 12-class problem achieved 75 percent accuracy with a 379ms runtime; adding OMG into the mix retained the same 75 percent accuracy while nudging the runtime up to 387ms — an overhead of roughly two percent.

The full paper on OMG is available under open-access terms on arXiv.org.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.