Please note that there is a more recent version of this project:
Vitis-AI 1.4 Flow for Avnet VITIS Platforms
Introduction
This guide provides detailed instructions for targeting the Xilinx Vitis-AI 1.2 flow to the following Avnet Vitis 2020.1 platforms:
- Ultra96-V2 Development Board
- UltraZed-EV SOM (7EV) + FMC Carrier Card
- UltraZed-EG SOM (3EG) + IO Carrier Card
- UltraZed-EG SOM (3EG) + PCIEC Carrier Card
This guide will describe how to download and install the pre-built SD card images, and execute the AI applications on the hardware.
IMPORTANT NOTE : The Ultra96-V2 Development Board requires a PMIC firmware update. See section "Known Issues - Ultra96-V2 PMIC firmware update" below for more details.
Step 1 - Create the SD card
Pre-built SD card images have been provided for the following Avnet platforms:
- ULTRA96V2 : Ultra96-V2 Development Board
- UZ7EV_EVCC : UltraZed-EV SOM (7EV) + FMC Carrier Card
- UZ3EG_IOCC : UltraZed-EG SOM (3EG) + IO Carrier Card
- UZ3EG_PCIEC : UltraZed-EG SOM (3EG) + PCIEC Carrier Card
The pre-built images include hardware designs built with Vitis with the following DPU configurations:
- ULTRA96V2 : 1 x B2304 (low RAM usage, low DSP48 usage), 200MHz/400MHz
- UZ7EV_EVCC : 2 x B4096 (high RAM usage, high DSP48 usage), 300MHz/600MHz
- UZ3EG_IOCC : 1 x B2304 (low RAM usage, low DSP48 usage), 150MHz/300MHz
- UZ3EG_PCIEC : 1 x B2304 (low RAM usage, low DSP48 usage), 150MHz/300MHz
The pre-built images include compiled models for the following two distinct configurations:
- B2304_lr : B2304 DPU with low RAM usage
- B4096_hr : B4096 DPU with high RAM usage
Note that the B4096_hr configuration is the same as on the ZCU104 pre-built image from Xilinx.
You will need to download one of the following pre-built SD card images:
- ULTRA96V2 : http://avnet.me/avnet-ultra96v2-vitis-ai-1.2-image (2020-10-22 - MD5SUM = def057a41d72ee460334435234c4264e)
- UZ7EV_EVCC : http://avnet.me/avnet-uz7ev-evcc-vitis-ai-1.2-image (2020-10-22 - MD5SUM = 0cca2bad952633fea7815dec12838137)
- UZ3EG_IOCC : http://avnet.me/avnet-uz3eg-iocc-vitis-ai-1.2-image (2020-10-22 - MD5SUM = 596bdd1a9e3598e4fb00e8cbf0567f7e)
- UZ3EG_PCIEC : http://avnet.me/avnet-uz3eg-pciec-vitis-ai-1.2-image (2020-10-22 - MD5SUM = 87f2336927be4b9e439c8ab9143527ac)
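After downloading, it is worth verifying the integrity of the image before programming it. On a Linux host, this can be done with the standard md5sum utility (the file name below follows the pattern used in the dd example later in this section):
$ md5sum Avnet-{platform}-Vitis-AI-1-2-{date}.img
The printed hash should match the MD5SUM listed above for your platform.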
Each board specific SD card image contains the hardware design (BOOT.BIN, dpu.xclbin), as well as the petalinux images (boot.scr, image.ub, rootfs.tar.gz). It is provided in image (IMG) format, and contains two partitions:
- BOOT – partition of type FAT (size=400MB)
- ROOTFS – partition of type EXT4
The first BOOT partition was created with a size of 400MB, and contains the following files:
- BOOT.BIN
- boot.scr
- image.ub
- init.sh
- platform_desc.txt
- dpu.xclbin
- {platform}.hwh
The second ROOTFS partition contains the rootfs.tar.gz content, and is pre-installed with the Vitis-AI runtime packages, as well as the following directories:
- /home/root/dpu_sw_optimize
- /home/root/Vitis-AI, which includes
- pre-built DNNDK samples
- pre-built VART samples
- pre-built Vitis-AI-Library samples
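If you want to confirm the partition layout of a downloaded image before programming it, you can list its partition table on a Linux host (a quick sketch; the file name is illustrative):
$ fdisk -l Avnet-{platform}-Vitis-AI-1-2-{date}.img
The output should list the two partitions described above: a FAT BOOT partition and a larger EXT4 ROOTFS partition.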
1. Program the board specific SD card image to a 16GB micro SD card (preferred) or 8GB micro SD card
a. On a Windows machine, use Balena Etcher or Win32DiskImager (free open-source software)
b. On a linux machine, use the dd utility
$ sudo dd bs=4M if=Avnet-{platform}-Vitis-AI-1-2-{date}.img of=/dev/sd{X} status=progress conv=fsync
Where {X} is a lowercase letter that specifies the device of your SD card. You can use “df -h” to determine which device corresponds to your SD card.
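If you are unsure which device corresponds to the SD card, compare the output of lsblk before and after inserting the card; the entry that appears (e.g. /dev/sdc, illustrative only) is your SD card:
$ lsblk   # run once before and once after inserting the SD card
Double-check the device name before running dd, as it will overwrite the target device without warning.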
Step 2 - Execute the AI applications on hardware
Some of the configuration steps only need to be performed once (after the first boot), including the following:
2. Boot the target board with the micro SD card that was created in the previous section
3. After boot, launch the dpu_sw_optimize.sh script
$ cd ~/dpu_sw_optimize/zynqmp
$ source ./zynqmp_dpu_optimize.sh
This script will perform the following steps:
- Auto resize SD card’s second (EXT4) partition
- QoS configuration for DDR memory
4. [Optional] Disable the dmesg verbose output:
$ dmesg -D
This can be re-enabled with the following:
$ dmesg -E
5. Validate the Vitis-AI runtime with the dexplorer utility.
For the ULTRA96V2, UZ3EG_IOCC, and UZ3EG_PCIEC targets, this should correspond to the following output:
$ dexplorer --whoami
[DPU IP Spec]
IP Timestamp : 2020-06-18 12:00:00
DPU Core Count : 1
[DPU Core Configuration List]
DPU Core : #0
DPU Enabled : Yes
DPU Arch : B2304
DPU Target Version : v1.4.1
DPU Freqency : 300 MHz
Ram Usage : Low
DepthwiseConv : Enabled
DepthwiseConv+Relu6 : Enabled
Conv+Leakyrelu : Enabled
Conv+Relu6 : Enabled
Channel Augmentation : Enabled
Average Pool : Enabled
For the UZ7EV_EVCC target, this should correspond to the following output:
$ dexplorer --whoami
[DPU IP Spec]
IP Timestamp : 2020-06-18 12:00:00
DPU Core Count : 2
[DPU Core Configuration List]
DPU Core : #0
DPU Enabled : Yes
DPU Arch : B4096
DPU Target Version : v1.4.1
DPU Freqency : 300 MHz
Ram Usage : High
DepthwiseConv : Enabled
DepthwiseConv+Relu6 : Enabled
Conv+Leakyrelu : Enabled
Conv+Relu6 : Enabled
Channel Augmentation : Enabled
Average Pool : Enabled
DPU Core : #1
DPU Enabled : Yes
DPU Arch : B4096
DPU Target Version : v1.4.1
DPU Freqency : 300 MHz
Ram Usage : High
DepthwiseConv : Enabled
DepthwiseConv+Relu6 : Enabled
Conv+Leakyrelu : Enabled
Conv+Relu6 : Enabled
Channel Augmentation : Enabled
Average Pool : Enabled
[DPU Extension List]
Extension Softmax
Enabled : Yes
6. Define the DISPLAY environment variable
$ export DISPLAY=:0.0
7. Change the DP monitor to a lower resolution, such as 640x480
$ xrandr --output DP-1 --mode 640x480
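If you are not sure which output name or modes your monitor reports, you can query them first (DP-1 is the output name used above; your connector name may differ):
$ xrandr -q
This lists the connected outputs and their supported modes; use one of the listed modes for the --mode argument.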
8. Launch the VART based sample applications
a. Launch the adas_detection application
$ cd ~/Vitis-AI/VART/samples/adas_detection
$ ./adas_detection ./video/adas.avi ./model_dir_for_{config}/yolov3_adas_pruned_0_9.elf
Where {config} is either B2304_lr or B4096_hr, depending on your platform.
b. Launch the pose_detection application
$ cd ~/Vitis-AI/VART/samples/pose_detection
$ ./pose_detection ./video/pose.mp4 ./model_dir_for_{config}/sp_net.elf ./model_dir_for_{config}/ssd_pedestrian_pruned_0_97.elf
Where {config} is either B2304_lr or B4096_hr, depending on your platform.
NOTE : The pose_detection application currently works only on the UZ7EV_EVCC platform, due to an issue compiling the SPnet model for the other targets. The tutorial will be updated as soon as this is resolved.
c. Launch the caffe version of the resnet50 application
$ cd ~/Vitis-AI/VART/samples/resnet50
$ ./resnet50 ./model_dir_for_{config}/resnet50.elf
Where {config} is either B2304_lr or B4096_hr, depending on your platform.
d. Launch the segmentation application
$ cd ~/Vitis-AI/VART/samples/segmentation
$ ./segmentation ./video/traffic.mp4 ./model_dir_for_{config}/fpn.elf
Where {config} is either B2304_lr or B4096_hr, depending on your platform.
e. Launch the video_analysis application
$ cd ~/Vitis-AI/VART/samples/video_analysis
$ ./video_analysis ./video/structure.mp4 ./model_dir_for_{config}/ssd_traffic_pruned_0_9.elf
Where {config} is either B2304_lr or B4096_hr, depending on your platform.
For the Vitis-AI-Library applications, refer to each sample directory’s “readme” file for details on how to execute the applications.
9. Launch the Vitis-AI-Library based sample applications
a. Launch the face_detect application with both variants of the densebox model (the second argument, “0”, selects the USB camera)
$ cd ~/Vitis-AI/vitis_ai_library/samples/facedetect
$ ./test_video_facedetect densebox_640_360 0
$ ./test_video_facedetect densebox_320_320 0
b. Compare the performance of each variant of the densebox models
$ ./test_performance_facedetect densebox_640_360 ./test_performance_facedetect.list
$ ./test_performance_facedetect densebox_320_320 ./test_performance_facedetect.list
Experiment with the other models, as well as the multi-model examples in the “~/Vitis-AI/vitis_ai_library/demo” directory.
10. The segmentation and roadline detection demo can be run in DRM mode or in GUI mode. In GUI mode, the demo makes use of matchbox, where only one window is visible at a time, but each window can be selected in the GUI. In DRM mode, matchbox is disabled, and output is sent directly to the DRM driver.
NOTE : The segs_and_lanedetect_detect demo currently works only on the UZ7EV_EVCC platform, due to an issue compiling the VPGnet model. The tutorial will be updated as soon as this is resolved.
a. To run the segmentation and roadline detection demo in GUI mode
$ cd ~/Vitis-AI/vitis_ai_library/demo/segs_and_roadline_detect
$ ./segs_and_lanedetect_detect_x seg_512_288.avi seg_512_288.avi seg_512_288.avi seg_512_288.avi lane_640_480.avi -t 2 -t 2 -t 2 -t 2 -t 2
The window that is visible can be selected from the GUI.
b. To run the segmentation and roadline detection demo in DRM mode
$ cd ~/Vitis-AI/vitis_ai_library/demo/segs_and_roadline_detect
$ ./segs_and_lanedetect_detect_drm seg_512_288.avi seg_512_288.avi seg_512_288.avi seg_512_288.avi lane_640_480.avi -t 2 -t 2 -t 2 -t 2 -t 2
11. The segmentation and pose estimation demo can be run in DRM mode or in GUI mode. In GUI mode, the demo makes use of matchbox, where only one window is visible at a time, but each window can be selected in the GUI. In DRM mode, matchbox is disabled, and output is sent directly to the DRM driver.
NOTE : The seg_and_pose_detect demo currently works only on the UZ7EV_EVCC platform, due to an issue compiling the SPnet model. The tutorial will be updated as soon as this is resolved.
a. To run the segmentation and pose estimation demo in GUI mode
$ cd ~/Vitis-AI/vitis_ai_library/demo/seg_and_pose_detect
$ ./seg_and_pose_detect_x seg_960_540.avi 0 -t 3 -t 3
b. To run the segmentation and pose estimation demo in DRM mode
$ cd ~/Vitis-AI/vitis_ai_library/demo/seg_and_pose_detect
$ ./seg_and_pose_detect_drm seg_960_540.avi 0 -t 3 -t 3
This section is optional, for those seeking to modify the provided examples.
In this webinar (http://avnet.me/ultra96-getting-started), I talked about modifying the Vitis-AI 1.1 examples. That discussion still applies to Vitis-AI 1.2, so it is worth covering how to modify the Vitis-AI-Library examples in Vitis-AI 1.2.
We will be modifying the facedetect sample from the ~/Vitis-AI/vitis_ai_library/samples directory. If we look at the test_video_facedetect.cpp source code, we can see that it is surprisingly small:
int main(int argc, char *argv[]) {
  // The first argument is the name of the model to load (e.g. densebox_640_360)
  string model = argv[1];
  // main_for_video_demo() handles video capture, threading, and display;
  // it is given a factory lambda that creates the FaceDetect instance,
  // and the process_result callback that draws each result on the frame
  return vitis::ai::main_for_video_demo(
      argc, argv,
      [model] { return vitis::ai::FaceDetect::create(model); },
      process_result, 2);
}
A visual representation of this code is shown in the following diagram:
We can see that the main function makes use of a generic main_for_video_demo() function, and passes it an instance of the FaceDetect class that provides create() and run() methods, as well as a process_result() function.
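For reference, a process_result callback for face detection has a shape similar to the following (a simplified sketch based on the Vitis-AI-Library 1.2 samples; the FaceDetectResult field names and the relative-coordinate scaling reflect how the stock samples draw boxes):

// Draw each detected face on the frame; the coordinates in
// FaceDetectResult are relative (0.0 to 1.0), so they are scaled
// by the image dimensions before drawing.
static cv::Mat process_result(cv::Mat &image,
                              const vitis::ai::FaceDetectResult &result,
                              bool is_jpeg) {
  for (const auto &r : result.rects) {
    cv::rectangle(image,
                  cv::Rect((int)(r.x * image.cols), (int)(r.y * image.rows),
                           (int)(r.width * image.cols),
                           (int)(r.height * image.rows)),
                  cv::Scalar(0, 255, 0), 2);
  }
  return image;
}

A modified use case can keep this overall structure, and simply extend the factory lambda and the callback.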
We can make use of this generic main_for_video_demo(), with a custom class that defines our modified use case(s), as shown in the following diagram:
I have provided two examples of modifying the facedetect application:
- Adding face landmarks
- Adding simple centroid based tracking
In order to draw landmarks on the detected faces, we will use the face_landmark model, which provides 5 landmark points representing the two eyes, the nose, and the two corners of the mouth.
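A minimal sketch of the core of this modification is shown below (illustrative only; it assumes the Vitis-AI-Library FaceLandmark class, whose result provides five (x, y) points relative to the cropped face region — see the actual source file referenced below for the complete implementation):

// For each detected face: crop the face region, run the face_landmark
// model on the crop, and draw the five landmark points on the full frame.
auto landmark = vitis::ai::FaceLandmark::create("face_landmark");
for (const auto &r : result.rects) {
  cv::Rect roi((int)(r.x * image.cols), (int)(r.y * image.rows),
               (int)(r.width * image.cols), (int)(r.height * image.rows));
  cv::Mat face = image(roi);
  auto points = landmark->run(face).points;  // five relative (x, y) pairs
  for (const auto &p : points) {
    cv::circle(image,
               cv::Point(roi.x + (int)(p.first * roi.width),
                         roi.y + (int)(p.second * roi.height)),
               3, cv::Scalar(255, 0, 0), -1);
  }
}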
The following diagram illustrates the modified code for this example.
The modified code can be found in the following location:
~/Vitis-AI/vitis_ai_library/samples/facedetectwithlandmark/test_video_facedetectwithlandmark.cpp
12. To run the face detection with landmark application
$ cd ~/Vitis-AI/vitis_ai_library/samples/facedetectwithlandmark
$ ./test_video_facedetectwithlandmark 0
For the tracking example, I have reused the following code:
Centroid based tracking:
- Adrian Rosebrock, Simple Object Tracking with OpenCV, PyImageSearch, https://www.pyimagesearch.com/2018/07/23/simple-object-tracking-with-opencv/
- C++ version, by Pratheek Balakrishna, https://github.com/prat96/Centroid-Object-Tracking
The simple centroid based tracking works as illustrated in the following figures:
- centroids are calculated for each detected face ROI
- each newly detected centroid is assigned a unique ID
- the smallest distance is used as the criterion to match a centroid from one frame to the next
This algorithm has limitations, specifically when detections disappear and re-appear from one frame to the next; in this case, the IDs can bounce between different faces. This is known as identity switching, and is one of the metrics used to evaluate tracking algorithms. Although this algorithm does not score well on the identity-switching metric, it is still a useful example of how to modify the face detection application. A minimal sketch of the matching step follows.
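To make the matching step concrete, here is a simplified greedy variant of nearest-centroid ID assignment (illustrative only; the actual sample follows the Rosebrock and Balakrishna implementations referenced above, which also tolerate temporary disappearance before deregistering an ID):

#include <cmath>
#include <limits>
#include <map>
#include <vector>

struct Centroid { float x, y; };

// Match each previously tracked ID to the closest unclaimed new centroid;
// any centroid left unmatched is a newly detected face and gets a fresh ID.
std::map<int, Centroid> update_tracks(const std::map<int, Centroid> &tracked,
                                      const std::vector<Centroid> &detected,
                                      int &next_id) {
  std::map<int, Centroid> result;
  std::vector<bool> used(detected.size(), false);
  for (const auto &[id, prev] : tracked) {
    float best = std::numeric_limits<float>::max();
    int best_i = -1;
    for (size_t i = 0; i < detected.size(); ++i) {
      if (used[i]) continue;
      float d = std::hypot(detected[i].x - prev.x, detected[i].y - prev.y);
      if (d < best) { best = d; best_i = (int)i; }
    }
    if (best_i >= 0) {        // the existing ID follows its nearest centroid
      result[id] = detected[best_i];
      used[best_i] = true;
    }                         // otherwise the ID is dropped (deregistered)
  }
  for (size_t i = 0; i < detected.size(); ++i)
    if (!used[i]) result[next_id++] = detected[i];  // new face, new ID
  return result;
}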
The following diagram illustrates the modified code for this example.
The modified code can be found in the following location:
~/Vitis-AI/vitis_ai_library/samples/facedetectwithtracking/test_video_facedetectwithtracking.cpp
13. To run the face detection with tracking application
$ cd ~/Vitis-AI/vitis_ai_library/samples/facedetectwithtracking
$ ./test_video_facedetectwithtracking 0
Known Issues - Ultra96-V2 PMIC firmware update
For the Ultra96-V2 Development Board, an important PMIC firmware update is required to run all of the AI applications.
Without the PMIC firmware update, the following AI applications cause periodic current peaks that exceed the default 4A fault threshold, asserting the power-on reset and causing the board to reboot.
- adas_detection
- inception_v1_mt
- resnet50_mt
- segmentation
- video_analysis
The PMIC firmware update increases this fault threshold, and prevents the reboot from occurring.
In order to update the PMIC firmware of your Ultra96-V2 development board, refer to the following instructions:
If you are unable to update the PMIC firmware on your Ultra96-V2, but still want to run all of the AI applications, you can make use of the following script (from Xilinx) to reduce the frequency of the DPU:
This script reduces the frequency of the PL_CLK0 (100MHz) clock source that feeds the Clock Wizard, which generates the multiple clock frequencies available to Vitis.
The recommended setting for the Ultra96-V2 without PMIC firmware update is to reduce the DPU frequencies down to 120MHz/240MHz.
If you have built the design for 150MHz/300MHz, you should use a value of 80%.
If you have built the design for 200MHz/400MHz (ie. pre-built solution), you should use a value of 60%.
Execute the following commands after boot:
$ dpu_clk
Real PL0_CLK 100000000
DPU Performance 100.0%
$ dpu_clk 60
$ dpu_clk
Real PL0_CLK 60000000
DPU Performance 60.0%
This will set the PL_CLK0 frequency to 60MHz, and thus the DPU frequencies to 60% of their original values.
Known Issues – AI model package for B2304_lr
As of this writing, there is no “AI model package” available for the B2304 (low RAM usage) DPU configuration. I have attempted to create this myself, with partial success.
The following table captures the status of my compilation effort for the caffe models.
The following table captures the status of my compilation effort for the tensorflow models.
Some of the deploy.prototxt files had to be modified for the model to successfully compile. The modifications for these models are the same as I described in this Vitis-AI 1.1 (part 1) tutorial:
http://avnet.me/vitis-ai-1.1-project-part1
Appendix 1 - Compile the Models from the Xilinx Model Zoo
The Xilinx Model Zoo is a repository of free pre-trained deep learning models, optimized for inference deployment on Xilinx™ platforms.
This project concentrates on the models for which example applications have been provided. It is important to know the correlation between model and application. The following table includes a non-exhaustive list of applications that were verified with corresponding models from the model zoo.
1. The first step, if not done so already, is to clone the “v1.2.1” branch of the Vitis-AI repository:
$ git clone -b v1.2.1 https://github.com/Xilinx/Vitis-AI
$ cd Vitis-AI
$ export VITIS_AI_HOME=$PWD
2. The second step is to download the pre-trained models from the Xilinx Model Zoo:
$ cd $VITIS_AI_HOME/AI-Model-Zoo
$ source ./get_model.sh
3. This will download version 1.2 of the model zoo archive (all_models_1.2.zip), and extract it to the models directory:
${VITIS_AI_HOME}/AI-Model-Zoo/models/all_models_1.2
4. Create a working directory called “avnet” (or other), and copy into it the hardware handoff file (.hwh) for the platform for which you wish to compile the models
$ mkdir avnet
$ cd avnet
$ cp {path_to_hwh}/{platform}.hwh .
5. Launch the Vitis-AI docker container
5.1 If not done so already, pull version 1.2.82 of the docker container with the following command:
$ docker pull xilinx/vitis-ai:1.2.82
5.2 Launch version 1.2.82 of the Vitis-AI docker from the Vitis-AI directory:
$ cd $VITIS_AI_HOME
$ sh -x docker_run.sh xilinx/vitis-ai:1.2.82
6. When prompted, read all the license notification messages, and press ENTER to accept the license terms.
7. Within the docker session, launch the "vitis-ai-caffe" Conda environment
$ conda activate vitis-ai-caffe
(vitis-ai-caffe) $
8. Navigate to the working directory we created earlier
$ cd AI-Model-Zoo/avnet
9. Use the dlet tool to generate your .dcf file
(vitis-ai-caffe) $ dlet -f {platform}.hwh
10. The previous step will generate a .dcf file with a name similar to dpu-06-18-2020-12-00.dcf. Rename this file to {platform}.dcf:
(vitis-ai-caffe) $ mv dpu*.dcf {platform}.dcf
11. Create a file named “{platform}.json” with the following content
{"target": "DPUCZDX8G", "dcf": "./{platform}.dcf", "cpu_arch": "arm64"}
Where {platform} is the name of your targeted platform (ie. ULTRA96V2)
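For example, for the ULTRA96V2 platform, the ULTRA96V2.json file would contain:
{"target": "DPUCZDX8G", "dcf": "./ULTRA96V2.dcf", "cpu_arch": "arm64"}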
12. Create a directory for the compiled models
(vitis-ai-caffe) $ mkdir compiled_output_{platform}
13. Create a generic recipe for compiling a caffe model, by creating a script named “compile_cf_model.sh” with the following content
platform=$1
model_name=$2
modelzoo_name=$3
vai_c_caffe \
--prototxt ../models/all_models_1.2/caffe/${modelzoo_name}/quantized/Edge/deploy.prototxt \
--caffemodel ../models/all_models_1.2/caffe/${modelzoo_name}/quantized/Edge/deploy.caffemodel \
--arch ./${platform}.json \
--output_dir ./compiled_output_${platform}/${model_name} \
--net_name ${model_name} \
--options "{'mode': 'normal'}"
14. To compile the caffe model used by the resnet50 application for the ULTRA96V2 platform, invoke the generic script we just created as follows:
$ conda activate vitis-ai-caffe
(vitis-ai-caffe) $ source ./compile_cf_model.sh ULTRA96V2 resnet50 cf_resnet50_imagenet_224_224_7.7G
15. To compile the caffe model used by the face_detection application for the ULTRA96V2 platform, invoke the generic script we just created as follows:
$ conda activate vitis-ai-caffe
(vitis-ai-caffe) $ source ./compile_cf_model.sh ULTRA96V2 densebox cf_densebox_wider_360_640_1.11G_1.2
16. Create a generic recipe for compiling a tensorflow model, by creating a script called “compile_tf_model.sh” with the following content
platform=$1
category=$2
model_name=$3
modelzoo_name=$4
vai_c_tensorflow \
--frozen_pb ../models/all_models_1.2/${category}/${modelzoo_name}/quantized/deploy_model.pb \
--arch ./${platform}.json \
--output_dir ./compiled_output_${platform}/${model_name} \
--net_name ${model_name} \
--options "{'save_kernel': ''}"
17. To compile the tensorflow model used by the resnet50 application for the ULTRA96V2 platform, invoke the generic script we just created as follows:
$ conda activate vitis-ai-tensorflow
(vitis-ai-tensorflow) $ source ./compile_tf_model.sh ULTRA96V2 classification tf_resnet50 tf_resnetv1_50_imagenet_224_224_6.97G_1.2
18. Verify the contents of the directory with the tree utility:
(vitis-ai-caffe) $ tree
├── compiled_output_{platform}
│   ├── densebox
│   │   ├── densebox_kernel_graph.gv
│   │   └── dpu_densebox.elf
│   ├── resnet50
│   │   ├── dpu_resnet50_0.elf
│   │   └── resnet50_kernel_graph.gv
│   └── tf_resnet50
│       ├── dpu_tf_resnet50_0.elf
│       ├── tf_resnet50_kernel.info
│       └── tf_resnet50_kernel_graph.gv
├── compile_cf_model.sh
├── compile_tf_model.sh
├── {platform}.dcf
├── {platform}.hwh
└── {platform}.json

6 directories, 15 files
19. Exit the tools docker
(vitis-ai-caffe) $ exit
Appendix 2 – Rebuilding the Design
This section describes how to re-build this design.
The DPU-enabled designs were built with Vitis.
With this in mind, the first step is to create a Vitis platform, which can be done on a Linux machine with the Vitis 2020.1 tools correctly installed.
Additionally, a patch is required to fix a known issue with the sd_card.img creation functionality of Vitis. Please refer to the section “Known Issues – Vitis SD Card creation issue” for instructions on installing the fix for this issue.
The following commands will clone the Avnet “bdf”, “hdl”, “petalinux”, and “vitis” repositories, all needed to re-build the Vitis platforms:
git clone https://github.com/Avnet/bdf
git clone -b 2020.1 https://github.com/Avnet/hdl
git clone -b 2020.1 https://github.com/Avnet/petalinux
git clone -b 2020.1 https://github.com/Avnet/vitis
Then, from the “vitis” directory, run make and specify one of the following targets:
- ultra96v2_oob : will re-build the Vitis platform for ULTRA96V2
- UZ7EV_EVCC : will re-build the Vitis platform for UZ7EV_EVCC
- UZ3EG_IOCC : will re-build the Vitis platform for UZ3EG_IOCC
- UZ3EG_PCIEC : will re-build the Vitis platform for UZ3EG_PCIEC
Also specify which build steps you want to perform, in order:
- xsa : will re-build the Vivado project for the hardware design
- plnx : will re-build the petalinux project for the software
- sysroot : will re-build the root file system, used for cross-compilation on the host
- pfm : will re-build the Vitis platform
As an example, to rebuild the Vitis platform for the Ultra96-V2, use the following commands:
cd vitis
make ultra96v2_oob 'step=xsa plnx sysroot pfm'
With the Vitis platform built, you can build the DPU-TRD, as follows:
make ultra96v2_oob 'step=dpu'
For reference, this build step performs the following:
- clone branch v1.2.1 of the Vitis-AI repository
- copy the DPU-TRD to the build directory, and rename it to DPU-TRD-{target}
- copy the following three files from the vitis/{target}/DPU-TRD directory:
- Makefile : modified Makefile
- dpu_conf.vh : modified DPU configuration
- config_file/prj_config : modified configuration file specifying DPU clocks & connectivity
- build the design with make
This will create a SD card image in the following directory:
vitis/build/DPU-TRD-{target}/prj/Vitis/binary_container_1/sd_card.img
This SD card image can be programmed to the SD card, as described previously in this tutorial. However, it does not yet contain all the installed runtime packages and pre-compiled applications.
In order to complete the full installation, you will need to follow the instructions in the following sections of the Vitis-AI repository:
- Installing the DNNDK runtime and examples: https://github.com/Xilinx/Vitis-AI/tree/v1.2.1/mpsoc
- Installing the Vitis AI runtime v1.2 (for Edge): https://github.com/Xilinx/Vitis-AI/tree/v1.2.1/VART
- Installing the Vitis AI Library v1.2 (for Edge): https://github.com/Xilinx/Vitis-AI/tree/v1.2.1/Vitis-AI-Library
Known Issues – Vitis SD Card creation issue
By default, Vitis will always create an SD card image with two fixed-size partitions:
- BOOT (FAT) – of size 1GB
- ROOTFS (EXT4) – of size 2GB
This works as long as the root file system does not exceed 2GB. In our case, the Ultra96-V2’s root file system is slightly larger than 2GB, so the patch is required for this target.
The following Vitis In-Depth Tutorial identifies this issue, and provides a patch to fix the issue:
The patch consists of replacing the default mkfsImage.sh script located here:
${XILINX_VITIS}/scripts/vitis/util/mkfsImage.sh
and replacing it with this version:
This patch changes the packaging flow to round the initial rootfs size up to the first full multiple of 512MB over the ext4 partition size. For example, a root file system slightly larger than 2GB results in a 2.5GB (5 x 512MB) ROOTFS partition.
Conclusion
I hope this tutorial, with its pre-built SD card images, will help you get started quickly with Vitis-AI 1.2 on the Avnet platforms.
If there are any specific models which are not yet available for the B2304_lr configuration, which you need for your project, please specify them in the comments below. I will give these more priority.
If there is any other related content that you would like to see, please share your thoughts in the comments below.