Tiny Models, Big Performance

MicroHD optimizes hyperdimensional computing for tinyML, reducing resource usage while maintaining accuracy and enabling edge AI deployment.

Nick Bild
2 years ago · AI & Machine Learning

We are currently witnessing an unprecedented boom in the adoption of artificial intelligence (AI) across many sectors. From personalized recommendation systems to autonomous vehicles, AI-powered technologies are reshaping our daily lives and transforming entire industries. One significant trend within this AI landscape is the rise of tinyML, which involves deploying machine learning models on resource-constrained edge computing devices.

This surge in tinyML's popularity is fueled by several factors. The technique offers numerous advantages over traditional cloud-based solutions, including reduced data transfer, lower latency, and enhanced privacy. With the proliferation of Internet of Things devices and the increasing need for real-time processing, tinyML is becoming essential for enabling intelligent decision-making directly at the edge.

However, many of the most powerful machine learning models are much too large and computationally intensive to run on edge devices with limited resources. This limitation hampers the deployment of advanced AI applications to tinyML platforms.

Hyperdimensional computing (HDC) offers a novel approach to represent and process data in high-dimensional spaces, inspired by the brain's functioning. By employing simple element-wise operations, HDC enables both inference and training tasks with significantly fewer computational resources compared to traditional models like convolutional neural networks or transformers. As such, HDC holds the potential to bridge the gap between resource-constrained edge hardware and sophisticated machine learning models.
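The article describes HDC only at a high level, but the general flavor is easy to sketch. The snippet below is a minimal, illustrative example of the core HDC primitives (random bipolar hypervectors, binding, bundling, and similarity-based classification); the dimensionality, feature names, and toy data are placeholders of our own choosing, not the researchers' setup.

```python
import numpy as np

# Illustrative parameters; real HDC systems tune these per workload.
DIM = 10_000          # hypervector dimensionality
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector with +1/-1 entries."""
    return rng.choice([-1, 1], size=DIM)

def bind(a, b):
    """Binding: element-wise multiplication associates two concepts."""
    return a * b

def bundle(hvs):
    """Bundling: element-wise majority vote superimposes several hypervectors."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Normalized dot-product similarity between two hypervectors."""
    return np.dot(a, b) / DIM

# Item memories: one random hypervector per feature and per feature level.
feature_hvs = {f: random_hv() for f in ("f0", "f1", "f2")}
level_hvs = {v: random_hv() for v in (0, 1)}

def encode(record):
    """Encode a {feature: level} record by binding pairs and bundling the results."""
    return bundle([bind(feature_hvs[f], level_hvs[v]) for f, v in record.items()])

# "Training" is just bundling encoded examples into one prototype per class.
class_protos = {
    "A": bundle([encode({"f0": 1, "f1": 0, "f2": 1}),
                 encode({"f0": 1, "f1": 0, "f2": 0}),
                 encode({"f0": 1, "f1": 1, "f2": 1})]),
    "B": bundle([encode({"f0": 0, "f1": 1, "f2": 0}),
                 encode({"f0": 0, "f1": 1, "f2": 1}),
                 encode({"f0": 0, "f1": 0, "f2": 0})]),
}

# Inference is a nearest-prototype lookup by similarity.
query = encode({"f0": 1, "f1": 0, "f2": 1})
print(max(class_protos, key=lambda c: similarity(query, class_protos[c])))
```

Because both training and inference reduce to element-wise arithmetic and a similarity lookup, this style of computation maps far more naturally onto small microcontrollers than the dense matrix multiplications of deep neural networks.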

Despite its potential, there is still ample room for further optimization in hyperdimensional computing solutions. Many existing HDC implementations either remain too computationally intensive for small hardware platforms or suffer unacceptable accuracy degradation as a result of those optimizations. For this reason, a duo of researchers at the University of California San Diego has developed a novel HDC optimization approach called MicroHD. This accuracy-driven approach iteratively tunes HDC hyperparameters to reduce model complexity without sacrificing performance.

MicroHD works by systematically reducing memory and computational requirements while respecting user-defined accuracy constraints. Unlike empirical, trial-and-error approaches, MicroHD employs a methodical optimization strategy built around a binary search of the hyperparameter space, so its runtime requirements scale with workload complexity. By concurrently optimizing multiple HDC hyperparameters, MicroHD ensures efficient resource utilization across HDC applications that employ different encoding methods and input data.
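MicroHD itself tunes several hyperparameters concurrently; the sketch below only illustrates the core idea of an accuracy-constrained binary search over a single hyperparameter (the hypervector dimensionality). The function name, search grid, and accuracy budget are illustrative assumptions, not the paper's exact procedure.

```python
def tune_dimension(train_and_eval, baseline_acc, max_loss=0.01,
                   lo=1_000, hi=16_000, step=1_000):
    """Return the smallest dimension (on a grid of `step`) whose accuracy
    stays within `max_loss` of `baseline_acc`.

    train_and_eval(dim) -> accuracy of an HDC model retrained at that dimension.
    """
    best = hi
    while lo <= hi:
        mid = ((lo + hi) // 2) // step * step   # midpoint snapped to the grid
        acc = train_and_eval(mid)
        if baseline_acc - acc <= max_loss:
            best, hi = mid, mid - step          # accuracy budget met: try shrinking further
        else:
            lo = mid + step                     # too much loss: back off toward larger models
    return best
```

The appeal of the binary-search formulation is that the number of retrain-and-evaluate runs grows only logarithmically with the size of the search grid, which is what keeps the tuning cost manageable as workloads get more complex.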

This optimization process results in significant resource savings of up to 266 times compared to standard HDC implementations, with minimal accuracy loss (less than one percent across a series of experiments), making it a promising solution for deploying advanced machine learning models on edge computing devices.

In addition to moving advanced models out of the cloud and allowing them to run on less powerful hardware platforms, MicroHD also has the potential to slash energy use. This is a growing concern among AI adopters, as the cost of running a cutting-edge model can be stratospheric, not to mention the environmental impact of all that energy consumption. With optimizations like MicroHD, HDC might soon play a larger role in the world of AI.
