This guide provides detailed instructions for targeting the Xilinx Vitis-AI 1.3 flow to the UltraZed-EV SOM (7EV), FMC Carrier Card, and Multi-Camera FMC.
This guide will describe how to download and install the pre-built SD card images, and execute the AI applications on the hardware.
Design Overview

The following block diagram illustrates the hardware design included in the pre-built image.
The pre-built SD card image includes a hardware design built with Vitis with the following DPU configurations:
- uz7ev_evcc_quadcam : 1 x B4096 (low RAM usage), 300MHz/600MHz
Note that the B4096 (low RAM usage) configuration is the same as on the ZCU102 & ZCU104 pre-built images from Xilinx.
The following images capture the resource utilization, with and without the DPU.
The following images illustrate the resource placement, with and without the DPU.
The following image illustrates the quad capture pipeline for the multi-camera FMC.
The multi-camera FMC makes use of Gigabit Multimedia Serial Link (GMSL), in order to connect the cameras to the processing board. GMSL is widely used in the automotive industry for in-vehicle high speed communication of video streams. Making use of low-cost coax cable up to 15 meters in length, GMSL meets the most stringent electromagnetic compatibility (EMC) requirements of the automotive industry.
Each of the four camera modules is an ON Semiconductor MARS (Modular Automotive Reference System) based camera module, consisting of :
- ON Semiconductor AR0231AT image sensor board
- MAXIM Integrated MAX96705 GMSL Serializer board
The camera modules are connected to the processing system, using High-Speed FAKRA Mini (HFM) connectors :
- Quad-HFM to 4x FAKRA Cable Assembly
The FMC module implements a quad-channel GMSL de-serializer :
- MAXIM Integrated MAX9286 Quad De-Serializer
The hardware design implemented in the PL includes the following components:
- MIPI CSI-2 RX receiver IP core
- AXI-Stream Switcher : splits the incoming composite video into four separate pipelines
- Image Pipeline : implemented with Demosaic, Color-Space-Conversion, and Scaler
- Frame Buffer Write : the DMA engine implementing writes to external DDR
The HDMI Display Pipeline is implemented with the following two IP cores:
- Video Mixer : allowing up to 16 layers
- Frame Buffer Read : the DMA engine implementing reads from external DDR
The Video Mixer was configured with 9 layers :
- 1 x RGBA - one general graphics layer with alpha-blending capability
- 4 x YUV 4:2:2 - four YUV 4:2:2 layers
- 4 x BGR - four BGR layers, compatible with OpenCV, and Vitis-AI models
Step 1 - Program the pre-built SD card image

A pre-built SD card image has been provided for this design.
You will need to download the following pre-built SD card image:
- uz7ev_evcc_quadcam : http://avnet.me/avnet-uz7ev_evcc_quadcam-vitis-ai-1.3-image
(2021-02-04 - MD5SUM = da301108722766da4abb4302c4e4962b)
The SD card image contains the hardware design (BOOT.BIN, dpu.xclbin), as well as the petalinux images (boot.scr, image.ub, rootfs.tar.gz). It is provided in image (IMG) format, and contains two partitions:
- BOOT – partition of type FAT (size=400MB)
- ROOTFS – partition of type EXT4
The first BOOT partition was created with a size of 400MB, and contains the following files:
- BOOT.BIN
- boot.scr
- image.ub
- init.sh
- platform_desc.txt
- dpu.xclbin
- arch.json
The second ROOTFS partition contains the rootfs.tar.gz content, and is pre-installed with the Vitis-AI runtime packages, as well as the following directories:
- /home/root/dpu_sw_optimize
- /home/root/Vitis-AI, which includes
- pre-built VART samples
- pre-built Vitis-AI-Library samples
- /home/root/gst-tutorial, which includes
- source code for gstreamer plug-ins
- /home/root/scripts, which includes
- launch scripts for demos
Once downloaded and extracted, the .img file can be programmed to a 16GB micro SD card.
0. Extract the archive to obtain the .img file
1. Program the board specific SD card image to a 16GB (or larger) micro SD card
a. On a Windows machine, use Balena Etcher or Win32DiskImager (free open-source software)
b. On a Linux machine, use the dd utility
$ sudo dd bs=4M if=Avnet-uz7ev_evcc_quadcam-Vitis-AI-1-3-{date}.img of=/dev/sd{X} status=progress conv=fsync
Where {X} is a lowercase letter that specifies the device of your SD card. You can use “df -h” to determine which device corresponds to your SD card.
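Before programming the image, it is worth verifying the download against the published MD5 checksum. Below is a minimal sketch; the verify_md5 helper and the temporary demo file are for illustration only, and in practice you would point it at the downloaded archive and use the MD5SUM published above.

```shell
# Sketch: verify a download against its published MD5 sum before flashing.
# verify_md5 and the demo file below are illustrative, not part of the image.
verify_md5() {
  actual=$(md5sum "$1" | awk '{print $1}')
  if [ "$actual" = "$2" ]; then echo "OK: $1"; else echo "MISMATCH: $1"; fi
}

# Demo on a small temporary file; for the real image, pass the downloaded
# archive and the checksum listed earlier in this tutorial.
printf 'hello' > /tmp/demo.img
verify_md5 /tmp/demo.img "5d41402abc4b2a76b9719d911017c592"   # prints: OK: /tmp/demo.img
```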
Step 2 - Execute the Quad Analytics applications on hardware

Some of the configuration steps only need to be performed once (after the first boot), including the following:
2. Boot the target board with the micro SD card that was created in the previous section
3. After boot, launch the dpu_sw_optimize.sh script
$ cd ~/dpu_sw_optimize/zynqmp
$ source ./zynqmp_dpu_optimize.sh
This script will perform the following steps:
- Auto resize SD card’s second (EXT4) partition
- QoS configuration for DDR memory
4. [Optional] Disable the dmesg verbose output:
$ dmesg -D
This can be re-enabled with the following:
$ dmesg -E
5. Validate the Vitis-AI runtime with the dexplorer utility.
For the uz7ev_evcc_quadcam target, this should correspond to the following output:
$ dexplorer --whoami
[DPU IP Spec]
IP Timestamp : 2020-11-02 15:15:00
DPU Core Count : 1
[DPU Core Configuration List]
DPU Core : #0
DPU Enabled : Yes
DPU Arch : B4096
DPU Target Version : v1.4.1
DPU Freqency : 300 MHz
Ram Usage : Low
DepthwiseConv : Enabled
DepthwiseConv+Relu6 : Enabled
Conv+Leakyrelu : Enabled
Conv+Relu6 : Enabled
Channel Augmentation : Enabled
Average Pool : Enabled
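When scripting, it can be useful to extract fields from this report programmatically, for example to select the compiled models matching the DPU architecture. The sketch below embeds a few lines of the sample output above as text; on the target you would capture the output of dexplorer --whoami instead.

```shell
# Sketch: parse the DPU architecture out of dexplorer-style output.
# The sample text stands in for: OUTPUT=$(dexplorer --whoami)
OUTPUT='DPU Core : #0
DPU Enabled : Yes
DPU Arch : B4096
DPU Target Version : v1.4.1'

# Split on " : " and print the value of the "DPU Arch" line.
ARCH=$(printf '%s\n' "$OUTPUT" | awk -F' : ' '/DPU Arch/ {print $2}')
echo "$ARCH"    # prints: B4096
```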
6. Change the HDMI monitor to a lower resolution, such as 1280x720
$ modetest-hdmi -s 46@44:1280x720-60@AR24
7. Launch the Quad Face Detection application
$ cd ~/scripts
$ source ./launch_quad_facedetect.sh
8. Launch the Quad Analytics application
$ cd ~/scripts
$ source ./launch_quad_ai.sh
These applications use the gstreamer infrastructure.
If we look at the launch_quad_facedetect.sh script, we see that it performs the following:
- initialize capture pipelines for multi-camera FMC
- launch gstreamer pipelines
The gst-launch-1.0 utility is used to launch the gstreamer pipelines:
gst-launch-1.0 \
v4l2src device=/dev/video2 io-mode=4 ! \
video/x-raw, width=640, height=360, format=BGR, framerate=30/1 ! \
queue ! vaifacedetect ! queue ! \
fpsdisplaysink video-sink="kmssink bus-id=b0050000.v_mix plane-id=38 \
render-rectangle=\"<0,0,640,360>\"" sync=false fullscreen-overlay=true \
\
v4l2src device=/dev/video3 io-mode=4 ! \
video/x-raw, width=640, height=360, format=BGR, framerate=30/1 ! \
queue ! vaifacedetect ! queue ! \
fpsdisplaysink video-sink="kmssink bus-id=b0050000.v_mix plane-id=39 \
render-rectangle=\"<640,0,640,360>\"" sync=false fullscreen-overlay=true \
\
v4l2src device=/dev/video4 io-mode=4 ! \
video/x-raw, width=640, height=360, format=BGR, framerate=30/1 ! \
queue ! vaifacedetect ! queue ! \
fpsdisplaysink video-sink="kmssink bus-id=b0050000.v_mix plane-id=40 \
render-rectangle=\"<0,360,640,360>\"" sync=false fullscreen-overlay=true \
\
v4l2src device=/dev/video5 io-mode=4 ! \
video/x-raw, width=640, height=360, format=BGR, framerate=30/1 ! \
queue ! vaifacedetect ! queue ! \
fpsdisplaysink video-sink="kmssink bus-id=b0050000.v_mix plane-id=41 \
render-rectangle=\"<640,360,640,360>\"" sync=false fullscreen-overlay=true \
\
-v
Four distinct pipelines are launched, each with very similar descriptors.
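Since the four pipelines differ only in the video device, mixer plane, and render rectangle, the full command can also be generated with a small loop. The sketch below only builds and prints the command (run it on the target after inspection); the PLUGIN variable makes it easy to substitute a different model, and the device numbers, plane IDs, and 2x2 tiling match the script above.

```shell
# Sketch: build the four-pipeline gst-launch-1.0 command in a loop.
# Devices /dev/video2..5 map to planes 38..41 and a 2x2 tile layout.
PLUGIN="vaifacedetect"   # e.g. swap for a different vai* plug-in
CMD="gst-launch-1.0"
i=0
for DEV in 2 3 4 5; do
  PLANE=$((38 + i))
  X=$(( (i % 2) * 640 ))   # column: 0 or 640
  Y=$(( (i / 2) * 360 ))   # row: 0 or 360
  CMD="$CMD v4l2src device=/dev/video$DEV io-mode=4 ! video/x-raw, width=640, height=360, format=BGR, framerate=30/1 ! queue ! $PLUGIN ! queue ! fpsdisplaysink video-sink=\"kmssink bus-id=b0050000.v_mix plane-id=$PLANE render-rectangle=<$X,$Y,640,360>\" sync=false fullscreen-overlay=true"
  i=$((i + 1))
done
echo "$CMD -v"   # inspect here; execute on the target hardware
```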
The video source for pipeline 1 is specified with the following lines:
v4l2src device=/dev/video2 io-mode=4 ! \
video/x-raw, width=640, height=360, format=BGR, framerate=30/1 ! \
The video sink for pipeline 1 is sent to one of the video mixer planes on the HDMI output with the following lines:
fpsdisplaysink video-sink="kmssink bus-id=b0050000.v_mix plane-id=38 \
render-rectangle=\"<0,0,640,360>\"" sync=false fullscreen-overlay=true \
The face detection is defined in the middle of each pipeline with the following line:
queue ! vaifacedetect ! queue ! \
This line can be modified by the user to specify a different AI model.
The pre-built image includes gstreamer plug-ins for the following:
- facedetect (densebox_640_360)
- facelandmark (densebox_640_360 + landmark)
- facetracking (densebox_640_360 + centroid-based tracking)
- persondetect (ssd_)
- posedetect (ssd + spn)
The gstreamer plug-ins were cross-compiled on a linux machine, as described in the following project:
https://www.hackster.io/dsp2/creating-a-vitis-ai-gstreamer-plugin-for-the-ultra96-v2-616a79
Although the project was written for Vitis-AI 1.2, the instructions also work with Vitis-AI 1.3, simply by replacing the following:
sdk-2020.1.0.0 => sdk-2020.2.0.0
https://www.xilinx.com/bin/public/openDownload?filename=sdk-2020.2.0.0.sh
vitis_ai_2020.1-r1.2.0.tar.gz => vitis_ai_2020.2-r1.3.0.tar.gz
https://www.xilinx.com/bin/public/openDownload?filename=vitis_ai_2020.2-r1.3.0.tar.gz
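The two substitutions above can also be applied mechanically with sed. A minimal sketch, demonstrated on a temporary file containing the old names (in practice, apply it to whichever setup script or notes reference the 1.2 artifacts):

```shell
# Sketch: apply the 2020.1/1.2 -> 2020.2/1.3 renames with sed.
# /tmp/names.txt is a demo stand-in for your own script or notes.
printf 'sdk-2020.1.0.0.sh\nvitis_ai_2020.1-r1.2.0.tar.gz\n' > /tmp/names.txt
sed -i -e 's/sdk-2020\.1\.0\.0/sdk-2020.2.0.0/g' \
       -e 's/vitis_ai_2020\.1-r1\.2\.0/vitis_ai_2020.2-r1.3.0/g' /tmp/names.txt
cat /tmp/names.txt
```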
As an additional reference, the source code for the gstreamer plug-ins included in the pre-built SD card image can be found at the following location:
/home/root/gst-tutorial
Appendix 2 – Rebuilding the Design

This section describes how to re-build this design.
The DPU-enabled designs were built with Vitis. With this in mind, the first step is to create a Vitis platform, which can be done with a linux machine, which has the Vitis 2020.2 tools correctly installed.
The following commands will clone the Avnet “bdf”, “hdl”, “petalinux”, and “vitis” repositories, all needed to re-build the Vitis platforms:
git clone https://github.com/Avnet/bdf
git clone -b 2020.2 https://github.com/Avnet/hdl
git clone -b 2020.2 https://github.com/Avnet/petalinux
git clone -b 2020.2 https://github.com/Avnet/vitis
Then, from the “vitis” directory, run make and specify one of the following targets:
- u96v2_sbc : will re-build the Vitis platform for the Ultra96-V2 Development Board
- uz7ev_evcc : will re-build the Vitis platform for the UltraZed-EV SOM (7EV) + FMC Carrier Card
- uz3eg_iocc : will re-build the Vitis platform for the UltraZed-EG SOM (3EG) + IO Carrier Card
Also specify which build steps you want to perform, in order:
- xsa : will re-build the Vivado project for the hardware design
- plnx : will re-build the petalinux project for the software
- sysroot : will re-build the root file system, used for cross-compilation on the host
- pfm : will re-build the Vitis platform
To rebuild the Vitis platform for the UltraZed-EV Quad-Camera platform, use the following commands:
cd vitis
make uz7ev_evcc_quadcam step=xsa
make uz7ev_evcc_quadcam step=plnx
make uz7ev_evcc_quadcam step=sysroot
make uz7ev_evcc_quadcam step=pfm
With the Vitis platform built, you can build the DPU-TRD, as follows:
make uz7ev_evcc_quadcam step=dpu
For reference, this build step performs the following:
- clone branch v1.3 of the Vitis-AI repository (if not done so already)
- copy the DPU-TRD to the projects directory, and rename it to {platform}_dpu
- copy the following three files from the vitis/app/dpu directory:
- Makefile : modified Makefile
- dpu_conf.vh : modified DPU configuration file specifying DPU architecture, etc…
- config_file/prj_config : modified configuration file specifying DPU clocks & connectivity
- build the design with make
This will create a SD card image in the following directory:
vitis/projects/uz7ev_evcc_quadcam_2020_2_dpu/prj/Vitis/binary_container_1/sd_card.img
This SD card image can be programmed to the SD card, as described previously in this tutorial. However, it does not yet contain all the installed runtime packages and pre-compiled applications.
In order to complete the full installation, you will need to follow the instructions in the following sections of the Vitis-AI repository:
- Installing the DNNDK runtime
https://github.com/Xilinx/Vitis-AI/tree/v1.3/demo/DNNDK
NOTE : prior to installation, the DNNDK runtime's install.sh will need to be modified according to the location of the BOOT partition (ie. /media/sd-mmcblk1p1)
- Installing the Vitis AI runtime v1.3 (for Edge), and examples
https://github.com/Xilinx/Vitis-AI/tree/v1.3/demo/VART
- Installing the Vitis AI Library v1.3 (for Edge), and examples
https://github.com/Xilinx/Vitis-AI/tree/v1.3/demo/Vitis-AI-Library
NOTE : after installation, the /etc/vart.conf file will need to be modified according to the location of the dpu.xclbin file (ie. /media/sd-mmcblk1p1/dpu.xclbin)
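For the vart.conf note above, the file simply names the xclbin to load via a "firmware:" entry (this key matches the Vitis-AI 1.3 convention; the path is the BOOT partition mount point and may differ on your setup). A sketch, written to a temporary path so as not to touch a real system:

```shell
# Sketch: point the VART runtime at dpu.xclbin on the BOOT partition.
# /tmp/vart.conf stands in for /etc/vart.conf on the target board.
CONF=/tmp/vart.conf
echo "firmware: /media/sd-mmcblk1p1/dpu.xclbin" > "$CONF"
cat "$CONF"   # prints: firmware: /media/sd-mmcblk1p1/dpu.xclbin
```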
With the DPU-TRD design built, you can compile the AI-Model-Zoo for this design, as follows:
make uz7ev_evcc_quadcam step=zoo
For reference, this build step performs the following:
- clone branch v1.3 of the Vitis-AI repository (if not done so already)
- copy the models/AI-Model-Zoo to the projects directory, and rename it to uz7ev_evcc_quadcam_2020_2_zoo
- copy the following files from the vitis/app/zoo directory
- compile_modelzoo.sh : script to compile all models
In order to perform the actual compilation, perform the steps described below:
==================================================================
Instructions to build AI-Model-Zoo for uz7ev_evcc_quadcam_2020_2 platform:
==================================================================
cd projects/uz7ev_evcc_quadcam_2020_2_zoo/.
./docker_run.sh xilinx/vitis-ai:1.3.411
source ./compile_modelzoo.sh
==================================================================
Additional Information:
- to compile only one (or a few) models,
remove unwanted model sub-directories from model-list directory
==================================================================
This will create compiled models in the following directory:
vitis/projects/uz7ev_evcc_quadcam_2020_2_zoo/vitis_ai_library/models
Appendix 3 - Camera Enclosure

For those interested in the camera enclosures, these were 3D printed.
The 3D design files were created by a local company, who have graciously made them available for use:
- http://avnet.me/mars-camera-enclosure
(2021-02-12 - MD5SUM = 7b15d82dfcc42021bf8d4df984c0055d)
The archive contains the following content:
- assembly instructions
- STEP files for the front, back, and base of the enclosure
- STL files for the front, back, and base of the enclosure
The assembly instructions are provided in SVG format, and shown here for reference:
If you successfully 3D print this enclosure, please share your feedback in the comments below, including which 3D printer you used, and which software.
Conclusion

I hope this tutorial, with its pre-built SD card image, will help you to get started quickly with Vitis-AI 1.3 on the UltraZed-EV Starter Kit and Multi-Camera FMC module.
If there is any other related content that you would like to see, please share your thoughts in the comments below.
Revision History

2021/02/12 - Initial Version
Acknowledgements

Thank you, Tom Simpson, for your excellent contributions:
- Creating a Vitis-AI GStreamer Plugin for the Ultra96-V2
- Machine learning example for the ZCU104 with FMC Quad-Camera module from Avnet
Thank you Kevin Keryk for generating the STL files for the camera enclosure, from the original STEP files.