Today we are excited to announce our foray into embedded Linux with official support for the Raspberry Pi 4!
Users of Edge Impulse can now select the right processor class for their embedded machine learning application: leverage our existing best-in-class support for low-power MCUs, or step up to processor classes that run embedded Linux when maximum performance is the objective.
We’ve brought the same great user experience our developers are already familiar with into the Linux domain (using full hardware acceleration on the Pi 4), with a refreshed set of tools and capabilities that makes deploying embedded machine learning models on Linux as easy as… Pi. 😊
In addition, we are thrilled to launch support for true object detection as part of our computer vision ML pipeline! Use a Raspberry Pi camera, or plug a standard USB webcam into one of the Pi's available USB ports, and harness the raw power of higher-performance compute and more sophisticated frameworks and libraries for your computer vision applications.
For audio applications, plug a standard USB microphone into one of the Pi's available USB ports. For sensor fusion, the Pi's 40-pin GPIO header lets you connect your favorite sensors as well.
The best way to get started is by going through our Raspberry Pi 4 guide and experiencing the enhanced user workflow for Linux. Then, easily train an object detection model with the help of our written tutorial. We've even put together a nice walkthrough video tutorial below.
SDKs for Python, Node.js, Go, and C++ are provided so you can easily build your own custom apps for inferencing. Here is an example using our Node.js Linux SDK that sends a text message via Twilio if a person and an elephant are seen in the same frame.
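A minimal sketch of that idea is below. The frame-checking logic is self-contained; the commented-out wiring assumes the `edge-impulse-linux` npm package (its `LinuxImpulseRunner`, `ImageClassifier`, and `Imagesnap` helpers) and the `twilio` client, so treat those names, the model path, and the environment variables as assumptions to adapt to your own setup rather than verbatim API from this post.

```javascript
// Pure helper: returns true when both 'person' and 'elephant' appear in a
// single frame's bounding boxes with at least the given confidence.
// Bounding boxes are assumed to look like { label: 'person', value: 0.92 }.
function shouldAlert(boundingBoxes, threshold = 0.5) {
    const labels = new Set(
        boundingBoxes
            .filter((b) => b.value >= threshold)
            .map((b) => b.label)
    );
    return labels.has('person') && labels.has('elephant');
}

/*
// Wiring sketch (assumed APIs; `npm install edge-impulse-linux twilio`):
const { LinuxImpulseRunner, ImageClassifier, Imagesnap } = require('edge-impulse-linux');
const twilio = require('twilio')(process.env.TWILIO_SID, process.env.TWILIO_TOKEN);

(async () => {
    const runner = new LinuxImpulseRunner('./model.eim'); // your downloaded model file
    await runner.init();

    const camera = new Imagesnap(); // or a webcam helper, depending on your device
    await camera.init();

    const classifier = new ImageClassifier(runner, camera);
    await classifier.start();

    classifier.on('result', (ev) => {
        if (ev.result.bounding_boxes && shouldAlert(ev.result.bounding_boxes)) {
            twilio.messages.create({
                body: 'Person and elephant detected in the same frame!',
                from: process.env.TWILIO_FROM,
                to: process.env.TWILIO_TO,
            });
        }
    });
})();
*/

module.exports = { shouldAlert };
```

Keeping the alert condition in a small pure function like `shouldAlert` makes it easy to unit-test the trigger logic without a camera, a model, or Twilio credentials attached.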
We’d love to hear from you on our forum about what you think and can’t wait to see how you plan on unleashing the combined power of Edge Impulse and Raspberry Pi in your embedded machine learning applications!