This is a quick tutorial on preparing the PointPillars network for compatibility with the Vitis AI NPU and deploying it on a Versal VEK280 or VEK385 board.
PointPillars turns point clouds into vertical columns ("pillars"), learns a feature vector for each pillar from the stacked pillar points, scatters these features into a 2D pseudo image, and applies a 2D CNN on the pseudo image to detect 3D objects.
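The scatter step that turns per-pillar features into the 2D pseudo image can be sketched as follows. This is a minimal illustration, not the original repository's implementation; the function name, argument shapes, and default grid size are assumptions (the default [496, 432] BEV grid matches the input size discussed later in this tutorial).

```python
import torch

def scatter_to_pseudo_image(pillar_features, coords, H=496, W=432):
    """Scatter per-pillar feature vectors back onto the BEV grid.

    pillar_features: (P, C) learned features, one row per non-empty pillar
    coords: (P, 2) integer (row, col) grid indices of each pillar
    Returns a (C, H, W) 2D pseudo image that the CNN backbone consumes.
    """
    P, C = pillar_features.shape
    # Empty pillars stay zero; only occupied cells receive features.
    canvas = torch.zeros(C, H * W, dtype=pillar_features.dtype)
    flat_idx = coords[:, 0] * W + coords[:, 1]
    canvas[:, flat_idx] = pillar_features.t()
    return canvas.view(C, H, W)
```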
- At a high level, it is not possible to make the whole model NPU compatible, so only the backbone, neck, and head of the model are made NPU compatible. This is due to unsupported operations such as tensor slicing and torch.max(), and unsupported layers such as Conv1d and BatchNorm1d.
- In this tutorial, we make the PointPillars model NPU compatible.
- Follow each of the steps below to make the PointPillars model NPU compatible:
- Make all CUDA-dependent functions CPU compatible (if your system does not have CUDA).
- To make backbone, neck and head NPU compatible:
- Rewrite the model so that it is easily traceable for ONNX export, i.e. replace module creation via for loops with explicit classes and straightforward logic, and replace ConvTranspose2d.
- Move the PillarLayer and PillarEncoder modules and the post-processing function outside of the PointPillars module, because they contain many unsupported operations such as torch.max(), tensor slicing, Conv1d, BatchNorm1d, and variable input shapes.
- Then fuse the weights and biases of each Conv2d and BatchNorm2d pair, because unfused BatchNorm layers cause problems during SNAP generation/dumping.
- Resize the input of the backbone using interpolation, because at the default input size of [N, 64, 496, 432] our NPU IP variant running on the VEK280 does not have sufficient AIE columns.
- Resize the input to [N, 64, 352, 352]: at this size the model uses 15 AIE columns, while our NPU IP variant has 16 AIE columns in total, so the model fits.
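The first step above (removing the CUDA dependency) usually comes down to selecting the device once and passing it everywhere instead of hard-coding `.cuda()` calls. A minimal sketch; the checkpoint filename in the comment is illustrative:

```python
import torch

# Pick the device once; on machines without CUDA this falls back to the CPU,
# so the same code runs on both GPU and CPU-only systems.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Conv2d(64, 64, 3).to(device)
x = torch.randn(1, 64, 32, 32, device=device)
y = model(x)

# Checkpoints saved on a GPU machine load cleanly on CPU via map_location, e.g.:
# state = torch.load("pointpillars.pth", map_location=device)
```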
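For the ONNX-traceability step, one common pattern is to replace config-driven for loops with explicitly spelled-out layers, so the exporter sees a fixed graph. The block below is a hedged sketch of that idea, not code from the original repository; the class name, channel counts, and the Upsample-plus-Conv2d substitution for ConvTranspose2d are all assumptions:

```python
import torch
import torch.nn as nn

class NeckBlock(nn.Module):
    """Illustrative ONNX-friendly block: layers are declared explicitly
    rather than built in a loop, and upsampling uses nn.Upsample followed
    by a Conv2d instead of ConvTranspose2d (one common substitution)."""
    def __init__(self, channels=128):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.conv(self.up(x)))
```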
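The Conv2d/BatchNorm2d fusion step folds the BatchNorm statistics into the convolution's weights and bias, producing a single equivalent Conv2d. A sketch of the standard folding arithmetic (the helper name is ours, but the math is the usual conv-bn fusion):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold BatchNorm2d (eval-mode) statistics into the preceding Conv2d."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding,
                      conv.dilation, conv.groups, bias=True)
    # y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    conv_bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_((conv_bias - bn.running_mean) * scale + bn.bias)
    return fused
```

Applied to every Conv2d/BatchNorm2d pair in the backbone, neck, and head, this leaves a model with plain convolutions only, which avoids the SNAP generation/dumping issue mentioned above.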
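The resize step itself is a single `torch.nn.functional.interpolate` call on the pseudo image before it enters the backbone; the choice of bilinear mode here is an assumption:

```python
import torch
import torch.nn.functional as F

# Default pseudo-image size [N, 64, 496, 432] needs more AIE columns than
# the VEK280 NPU IP variant provides, so downsample to [N, 64, 352, 352]
# before the backbone.
pseudo_image = torch.randn(1, 64, 496, 432)
resized = F.interpolate(pseudo_image, size=(352, 352),
                        mode="bilinear", align_corners=False)
```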
Output from GPU:
Output from VEK280 Board:
Along with the NPU Vitis AI flow on Versal, we have also deployed a highly performance-optimized version of PointPillars on the DPU with Vitis AI for MPSoC and Versal boards.
Kudos to Mohan Lal Shrestha (ML Engineer, LogicTronix) and Dikesh Shakya Banda (ML Acceleration Lead, LogicTronix) for preparing this NPU and Vitis AI tutorial.
For any queries, please contact us at: info@logictronix.com
LogicTronix is an AMD-Xilinx Partner for FPGA Design and ML Acceleration.