A Second USB Stick of Deep Learning


Alasdair Allan

Back towards the middle of last year Intel launched the Movidius Neural Compute Stick, the world’s first deep learning processor on a USB stick.

The stick was based around the Movidius MA2150, a low-power VPU enabling you to add visual intelligence and machine learning capabilities in battery-powered products at the edge, without a connection to the network, or the need for a cloud backend.

Now it’s being joined by a second USB stick, which, at least on the face of it, seems to sit in a similar niche—the Gyrfalcon Technology Laceli AI Compute Stick.

The arrival of these sorts of platforms could well be the start of a change in how we think about machine learning, and in how the Internet of Things might be built.

The ability to run trained networks “at the edge,” nearer the data, without the cloud, or in some cases even without a network connection, could reduce the barriers to developing and deploying machine learning applications. It could potentially help make “smart objects” actually smart, rather than just network-connected clients for machine learning algorithms running in remote data centres. Because now there is, at least potentially, a way to put the smarts on the smart device rather than in the cloud.

Details around the new platform are still slim; however, the company claims that its compute stick delivers “ …2.8 TOPS performance within 0.3 W of power, which is ×90 more efficient than the Movidius USB Stick,” which runs at 100 gigaflops of performance within a 1 W power envelope.
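Taking the claimed figures at face value, the “×90” number roughly checks out. Here is a quick back-of-the-envelope calculation; note it treats TOPS and gigaflops as directly comparable raw throughput, which glosses over differences in numeric precision between the two chips:

```python
# Claimed figures from the two vendors (throughput in operations/second).
laceli_ops_per_sec = 2.8e12    # 2.8 TOPS (Gyrfalcon's claim)
laceli_watts = 0.3

movidius_ops_per_sec = 100e9   # 100 gigaflops (Movidius stick)
movidius_watts = 1.0

# Efficiency as operations per watt.
laceli_eff = laceli_ops_per_sec / laceli_watts
movidius_eff = movidius_ops_per_sec / movidius_watts

print(f"Laceli:   {laceli_eff:.2e} ops/W")
print(f"Movidius: {movidius_eff:.2e} ops/W")
print(f"Ratio:    {laceli_eff / movidius_eff:.0f}x")  # roughly 93x
```

The ratio comes out at about 93×, consistent with the headline claim, though with the usual caveat that vendor benchmark numbers rarely compare like for like.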

The power savings are supposed to come from the company’s APiM architecture, which stands for “AI Processing in Memory.” It uses memory itself as the AI processing unit, eliminating the movement of large amounts of data in and out of memory that normally results in high power consumption.

The new stick supports the Caffe, TensorFlow, and MXNet frameworks, although how these open source frameworks interact with the company’s own SDK is still to be made clear.

There is also no pricing information available yet, but the new deep learning stick will be shown in public for the first time at CES 2018 in Las Vegas next week. So if you’re at the show and want to know more, you should check it out.

Alasdair Allan
Scientist, author, hacker, maker, and journalist. Building, breaking, and writing. For hire. You can reach me at 📫 alasdair@babilim.co.uk.