Having looked last week at Vitis for embedded development systems, going forward we are going to be exploring how to use Vitis to accelerate our applications at the edge and in the cloud.
Before I do that, however, I think it would be a good idea to look at the libraries which are available to help us accelerate our applications.
At the highest level, Vitis libraries can be split into two groups:
- Common libraries: These libraries provide basic functions which are used across a range of applications and domains, for example maths, DSP and linear algebra.
- Domain-specific libraries: These libraries provide acceleration functions for specific domains, e.g. security, vision, finance or database.
If you want to explore the Vitis libraries, they are available on GitHub.
What is very interesting is that these libraries provide three different levels of implementation, with each level increasing the degree of abstraction.
- Level one: The lowest level of implementation, intended for use in a High Level Synthesis flow. These could then be implemented in Vivado or used as part of a new kernel development.
- Level two: The middle level provides acceleration kernels that are used in the Vitis design flow with the Xilinx RunTime (XRT).
- Level three: The highest level provides applications created from several acceleration kernels. These applications use software APIs and, of course, XRT.
Let's explore the different levels in more detail using the Vision library as an example.
Starting with the lowest level and looking at an accumulate example, we can see that the created accumulate_accel function contains a call to the xf::cv::accumulate function.
As such, we have to ensure the inputs and outputs of the function are in the xf::cv::Mat format. We can also use the template declaration to define the pixel types, image size and number of pixels processed per clock.
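To make the accumulate behaviour concrete, the sketch below mirrors its semantics in plain C++ with no Vitis headers: an element-wise sum of the input frame into a wider accumulator type so repeated frames do not overflow. The function name and types here are my own stand-ins, not the library's; the real xf::cv::accumulate works on xf::cv::Mat objects with the template parameters described above.

```cpp
#include <cstdint>
#include <vector>

// Illustrative stand-in for the accumulate operation (not the Vitis API):
// element-wise dst = acc + src, widening 8-bit pixels into a 16-bit
// accumulator. The image size is implied by the vector length, much as
// the real function's template parameters fix the rows and columns.
std::vector<uint16_t> accumulate_sketch(const std::vector<uint8_t>& src,
                                        const std::vector<uint16_t>& acc) {
    std::vector<uint16_t> dst(src.size());
    for (std::size_t i = 0; i < src.size(); ++i) {
        dst[i] = static_cast<uint16_t>(acc[i] + src[i]);
    }
    return dst;
}
```

In the HLS version this loop body is what gets pipelined, with the pixels-per-clock template parameter controlling how many of these additions occur in parallel.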
This level one implementation can be pulled into a new kernel or implemented within Vivado, where we need to correctly map the interfaces to the surrounding design.
Exploring the same accumulate example for level two shows the same xf::cv::accumulate function; however, this time the example also includes all of the necessary interfacing and conversion functions to work as part of the Vitis flow and XRT.
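The key extra step at level two is converting the raw memory-mapped buffer the kernel receives into the Mat format before calling the library function (the Vision library provides helpers such as xf::cv::Array2xfMat for this). A header-free C++ approximation of that conversion, with my own simplified struct and function names, looks like this:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Minimal stand-in for xf::cv::Mat: just dimensions and pixel storage.
// (The real class also carries pixel type and pixels-per-clock as
// template parameters.)
struct SimpleMat {
    int rows, cols;
    std::vector<uint8_t> data;
    SimpleMat(int r, int c) : rows(r), cols(c), data(r * c) {}
};

// Sketch of the pointer-to-Mat conversion a level two wrapper performs
// before invoking the accelerated function.
void array_to_mat(const uint8_t* in, SimpleMat& mat) {
    std::memcpy(mat.data.data(), in, mat.data.size());
}
```

In the actual kernel wrapper this copy is paired with HLS interface pragmas so the input pointer maps onto an AXI master port.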
For the level two flow, the accelerated kernel is then deployed using OpenCL; the host code is available in the example folder as well. In this code, you can see the OpenCL functions being used to load and start the kernel.
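The host code follows the standard OpenCL pattern for Xilinx devices. In outline (the steps below use the standard OpenCL C API names; the kernel name assumes the accumulate example):

```
// Typical host-side sequence for a level two kernel (outline):
1. clGetPlatformIDs / clGetDeviceIDs     - find the Xilinx platform and device
2. clCreateContext / clCreateCommandQueue
3. clCreateProgramWithBinary             - load the compiled .xclbin
4. clCreateKernel("accumulate_accel")
5. clCreateBuffer                        - allocate input/output device buffers
6. clSetKernelArg                        - point the kernel at the buffers
7. clEnqueueMigrateMemObjects            - move input data to the device
8. clEnqueueTask                         - start the kernel
9. clEnqueueMigrateMemObjects / clFinish - read back the results
```

Step 3 is where XRT comes in: unlike a GPU flow, the program is loaded from a pre-built binary rather than compiled from source at run time.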
Finally, level three provides an example application such as color detection, corner detection, etc.
Again, the accelerated kernel is implemented using several xf::cv functions, while the host application controls it using OpenCL.
These three different levels of library implementation enable us to work at different levels of abstraction as required for the challenge at hand.
Obviously, the most productive approach is to work at the highest level of abstraction possible.
Over the next few weeks, we will be looking at how we can use these libraries for both edge and cloud applications. Hopefully we now understand a little more about the libraries available and their structure.
See My FPGA / SoC Projects: Adam Taylor on Hackster.io
Get the Code: ATaylorCEngFIET (Adam Taylor)
Access the MicroZed Chronicles Archives with over 300 articles on the FPGA / Zynq / Zynq MPSoC updated weekly at MicroZed Chronicles.