NVIDIA Is Being Pushed to the Edge

A collaboration between NVIDIA and Edge Impulse is increasing the accuracy of edge ML models deployed on resource-constrained devices.

Nick Bild
Computer vision on edge hardware is maturing rapidly (📷: Edge Impulse)

As NVIDIA’s GTC 2024 continues this week, the big announcements keep rolling in. We recently reported on the new Blackwell architecture, which enables more practical training and inferencing of massive trillion-parameter machine learning (ML) models. But we would be remiss if we did not also mention the updates at the other end of the spectrum — edge ML. A sledgehammer is not the right tool for every task, after all.

Edge ML allows us to run models directly on devices at the network's edge, such as smartphones, sensors, IoT devices, and other embedded systems. This approach enables data processing and analysis to occur locally on the device itself, rather than relying on centralized servers or cloud infrastructure. Edge ML matters because it sidesteps the limitations of traditional approaches that depend on large, remote clusters of powerful computers and GPUs.

These edge techniques enhance privacy and security by keeping sensitive data local and reducing the risks associated with transmitting it over networks. With growing concern around data privacy and regulations like GDPR and CCPA, organizations are increasingly compelled to prioritize data protection. Edge ML is also crucial in applications where instantaneous responses are necessary, such as autonomous vehicles, industrial automation, and healthcare monitoring systems. In scenarios like these, even milliseconds of delay can have significant consequences, making edge inference indispensable for achieving acceptable performance.

At the conference, NVIDIA and Edge Impulse announced a collaboration that promises to help edge ML applications mature. The Edge Impulse platform is tailored to building machine learning models and deploying them to edge devices. And with the newly released integrations with NVIDIA's TAO Toolkit and Omniverse, those models will be more accurate and efficient than ever, greatly expanding the number of use cases for edge ML.

Using the NVIDIA TAO Toolkit, developers can create powerful, customized, production-ready computer vision applications. In the past, these models would have had to run on expensive, energy-hungry computing equipment that is poorly suited to portable use cases where privacy and speed are required. Now, models trained with TAO can be fine-tuned and deployed using the Edge Impulse platform. And with the optimization tools that are available, these models can run on the tiniest of platforms, even those powered by Arm Cortex-based microcontrollers.
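To make that hand-off concrete, here is a minimal sketch of what it might look like with Edge Impulse's Python SDK (the edgeimpulse package): profiling a TAO-exported model against a Cortex-M target, then packaging it for firmware. The model file name, API key, and deployment settings are placeholders, and exact parameter names may differ in the SDK version you have installed, so treat this as an illustration of the flow rather than a drop-in script.

```python
# Hypothetical sketch: profile a TAO-trained model for an Arm Cortex-M target
# and package it for deployment using the Edge Impulse Python SDK.
import edgeimpulse as ei

ei.API_KEY = "ei_your_project_api_key"  # placeholder project key

# Estimate on-device RAM, flash, and latency for a small Cortex-M target
profile = ei.model.profile(
    model="tao_detector.tflite",        # placeholder model exported from TAO
    device="cortex-m4f-80mhz",
)
profile.summary()

# Package the model as a library that can be compiled into firmware
ei.model.deploy(
    model="tao_detector.tflite",
    model_output_type=ei.model.output_type.Classification(),
    deploy_target="zip",
    output_directory="deployment",
)
```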

Of course, an ML algorithm is only as good as the data it was trained on, so Edge Impulse has also integrated NVIDIA's Omniverse into its workflow. Omniverse allows organizations to quickly generate large amounts of high-quality synthetic image data, which is especially valuable when obtaining real-world data is costly, time-consuming, or raises privacy concerns. As data collection can be a major drain on resources, this new feature promises to greatly accelerate time to market for production models.
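As a rough illustration, synthetic data generation of this kind is typically scripted with Omniverse's Replicator Python API from inside an Omniverse application (for example, Isaac Sim's script editor). The USD asset path, semantic label, and output directory below are placeholders, and this is a sketch under those assumptions rather than the exact pipeline used in the Edge Impulse integration.

```python
# Hypothetical sketch: generate labeled synthetic images of a part with
# Omniverse Replicator (runs inside an Omniverse app such as Isaac Sim).
import omni.replicator.core as rep

with rep.new_layer():
    # Load the part to photograph and tag it with a semantic class label
    part = rep.create.from_usd(
        "omniverse://localhost/parts/widget.usd",   # placeholder asset
        semantics=[("class", "widget")],
    )

    # A camera plus a render product defines what gets captured each frame
    camera = rep.create.camera(position=(0, 0, 500), look_at=part)
    render_product = rep.create.render_product(camera, (1024, 1024))

    # Randomize the part's pose on every frame to diversify the dataset
    with rep.trigger.on_frame(num_frames=1000):
        with part:
            rep.modify.pose(
                position=rep.distribution.uniform((-100, -100, 0), (100, 100, 0)),
                rotation=rep.distribution.uniform((0, 0, 0), (0, 0, 360)),
            )

    # Write RGB frames plus 2D bounding-box annotations to disk
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(
        output_dir="_out_widget_dataset",            # placeholder path
        rgb=True,
        bounding_box_2d_tight=True,
    )
    writer.attach([render_product])
```

The resulting image and annotation files can then be uploaded to an Edge Impulse project as labeled training data alongside any real-world samples.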

Taken together, these enhancements will allow users to rapidly create professional-grade industrial ML models that run on heavily resource-constrained devices. That opens up a new world of possibilities for edge ML, from visual inspection of manufacturing production lines to detect defects and equipment malfunctions, to object detection that tracks surgical inventory to help prevent postoperative complications.

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.