Since the RzBoard/V2L supports hardware acceleration through the DRP-AI, Edge Impulse provides a DRP-AI library that combines our C++ Edge Impulse SDK with model headers that run on the hardware accelerator. If you would like to integrate the model source code into your application and benefit from the DRP-AI, you need to select the DRP-AI library.
Your trained ML model in the Edge Impulse Studio can be downloaded as a DRP-AI library. This library is provided as header-only C++ with no external dependencies, so it can be integrated into your project and compiled into your application.
The library contains the model parameters, the model weights that run on the DRP-AI, and the Edge Impulse SDK with the function calls needed to drive the inferencing engine.
To benefit from the hardware acceleration provided by the RzBoard/V2L, we need to download the DRP-AI library from the Deployment page. This will allow you to run your model efficiently on the RzBoard/V2L.
In the Studio, we define the Impulse as a combination of any preprocessing code needed to extract features from your raw data plus inference with your trained machine learning model. Similarly, with the DRP-AI library, you provide raw data from your sensor and it returns the output of your model. It performs feature extraction and inference, just as you configured in the Edge Impulse Studio!
A working demonstration of this project can be found here.
Edge Impulse Linux SDK for C++
This library lets you run machine learning models and collect sensor data on Linux machines using C++. This SDK is part of Edge Impulse, where we enable developers to create the next generation of intelligent device solutions with embedded machine learning. Start here to learn more and train your first model.
Installation guide
In this project, we assume you have followed the instructions from this Hackster project for building a Yocto image for the RzBoard and network booting it, as well as the Train & Deploy ML Model on RzBoard with Edge-Impulse project.
Hardware setup
From your Linux host machine run the following command:
bn@nc:~$ sudo su
root@nc:/home/bn# cd /nfs/rzv2l/home/root/
root@nc:/nfs/rzv2l/home/root#
Clone this repository and initialize the submodules:
root@nc:/nfs/rzv2l/home/root# git clone https://github.com/edgeimpulse/example-standalone-inferencing-linux
root@nc:/nfs/rzv2l/home/root# cd example-standalone-inferencing-linux && git submodule update --init --recursive
Update the content of the folder
Update the Makefile with the following commands:
root@nc:/nfs/rzv2l/home/root/example-standalone-inferencing-linux# rm Makefile
root@nc:/nfs/rzv2l/home/root/example-standalone-inferencing-linux# curl -o Makefile https://raw.githubusercontent.com/bngabonz/RzBoard-Network-Boot/main/Makefile
Update the camera app .cpp file with the following commands:
root@nc:/nfs/rzv2l/home/root/example-standalone-inferencing-linux# cd source
root@nc20:/nfs/rzv2l/home/root/example-standalone-inferencing-linux/source# ls
audio.cpp camera.cpp collect.cpp custom.cpp eim.cpp
root@nc20:/nfs/rzv2l/home/root/example-standalone-inferencing-linux/source# rm camera.cpp
root@nc20:/nfs/rzv2l/home/root/example-standalone-inferencing-linux/source# ls
audio.cpp collect.cpp custom.cpp eim.cpp
root@nc20:/nfs/rzv2l/home/root/example-standalone-inferencing-linux/source# curl -o camera.cpp https://raw.githubusercontent.com/bngabonz/RzBoard-Network-Boot/main/camera.cpp
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  202k    0  202k    0     0   283k      0 --:--:-- --:--:-- --:--:--  283k
root@nc20:/nfs/rzv2l/home/root/example-standalone-inferencing-linux/source# ls
audio.cpp camera.cpp collect.cpp custom.cpp eim.cpp
Download the DRP-AI Library from Edge Impulse
You need to download a DRP-AI library from your own project. If you are using the public project, click Clone this project in the upper-right corner to clone it to your own account.
Head to the Deployment page for your project. Select DRP-AI library, scroll down, and click Build. Note that you must have a fully trained model in order to download any of the deployment options. Your impulse will download as a C++ library in a .zip file.
Unzip the DRP-AI library and copy its folders into this repository with these commands:
root@nc:~# cd /home/bn/Downloads/
root@nc:/home/bn/Downloads# unzip bngaboav-project-1-drp-ai-lib-v4.zip -d /nfs/rzv2l/home/root/example-standalone-inferencing-linux/
Check the content of the folder:
root@nc:/home/bn/Downloads# ls /nfs/rzv2l/home/root/example-standalone-inferencing-linux/
build-opencv-linux.sh LICENSE tensorflow-lite
build-opencv-mac.sh Makefile tflite
edge-impulse-sdk model-parameters tflite-model
inc README.md third_party
ingestion-sdk-c source tidl-rt
root@nc:/home/bn/Downloads# exit
exit
bn@nc:~$
At this stage, you can have a look inside the tflite-model/ directory. You will see the DRP-AI model that the Edge Impulse SDK will run on the hardware accelerator. If you do not find a file called drpai_model.h, you have probably downloaded the plain C++ library instead, or your build was not successful and fell back to the normal library.
This assumes you have followed the Hackster project to network boot the RzBoard and that you have the same hardware setup as the Train & Deploy ML Model on RzBoard with Edge-Impulse project.
From your Linux host, connect via the serial console:
- In the first console, run:
bn@nc:~$ sudo chmod 666 /dev/ttyUSB0
bn@nc:~$ cu -s 115200 -l /dev/ttyUSB0 --parity none --nostop
Connected.
- Open another console on the Ubuntu PC and change the RTS/CTS flow-control option:
bn@nc:~$ stty -F /dev/ttyUSB0 -crtscts
On the RzBoard, press and hold the S1 button to power on the RZ/V2L. Verify that the U-Boot/Linux boot messages display over the serial connection.
- Return to the first console then log in:
bn@nc:~$ cu -s 115200 -l /dev/ttyUSB0 --parity none --nostop
Connected.
.
.
.
Poky (Yocto Project Reference Distro) 3.1.14 rzboard ttySC0
rzboard login: root
Password: avnet
Building .eim files
Once you have logged in to the board, run the following command to build the Edge Impulse for Linux model (an .eim file, built with APP_EIM=1) that can be used by the Python, Node.js, or Go SDKs:
root@rzboard:~# cd example-standalone-inferencing-linux
root@rzboard:~/example-standalone-inferencing-linux# APP_EIM=1 TARGET_RENESAS_RZV2L=1 make -j2
The model will be placed in build/model.eim and can be used directly by your application.
Build the camera application with the following command: APP_CAMERA=1 TARGET_RENESAS_RZV2L=1 make -j2
Then check that the binary is in the build directory.
root@rzboard:~/rzproject/example-standalone-inferencing-linux-rps# APP_CAMERA=1 TARGET_RENESAS_RZV2L=1 make -j2
root@rzboard:~/rzproject/example-standalone-inferencing-linux-rps# cd build
root@rzboard:~/rzproject/example-standalone-inferencing-linux-rps/build# ls
camera custom model.eim
Run the application with the camera's ID to perform inference.
root@rzboard:~/rzproject/example-standalone-inferencing-linux-rps/build# ./camera
Requires one parameter (ID of the webcam).
You can find these via `v4l2-ctl --list-devices`.
E.g. for:
C922 Pro Stream Webcam (usb-70090000.xusb-2.1):
/dev/video0
The ID of the webcam is 0
root@rzboard:~/rzproject/example-standalone-inferencing-linux-rps/build# ./camera 0
[ 1720.126881] usb 2-1.4: reset high-speed USB device number 3 using ehci-platform
16 ms. paper: 0.93555, rock: 0.06342, scissors: 0.00099
7 ms. paper: 0.93555, rock: 0.06342, scissors: 0.00099
7 ms. paper: 0.93555, rock: 0.06342, scissors: 0.00099
7 ms. paper: 0.93555, rock: 0.06342, scissors: 0.00099