MicroZed Chronicles: Creating a Zynq or FPGA-Based, Image Processing Platform

Adam Taylor

One of the hottest applications for FPGA and heterogeneous SoC development is embedded vision. Embedded vision is used across a range of applications, from the IIoT to vision-guided robotics and autonomous driving.

Therefore, over the next few blogs I am going to examine how we can create a simple image processing platform in Vivado, and then demonstrate how we can build upon this platform to create an image processing application using High-Level Synthesis. To do this, I will be creating a mixture of blogs and projects here. For completeness, we will perform the image processing both in the programmable logic (PL) and in the processing system (PS) of a Zynq, along with acceleration of functions from the PS into the PL, of course.

However, we are not tied to using the Zynq or its bigger brother, the Zynq MPSoC, for image processing applications. Depending upon the application, we can select either the Zynq / Zynq MPSoC or a more traditional FPGA such as the Artix-7 or Spartan-7. Selecting between the two device classes depends on many factors; however, one key decision factor is the level of software processing being performed on the captured image. If we want to run high-level image processing algorithms, e.g. OpenCV-based applications, we need to select a SoC. Alternatively, if the image processing algorithm is implemented entirely within the programmable logic, we can use an FPGA and implement a softcore processor for control, configuration, and communication.

Regardless of which class of device we select, the main image processing chain will be very similar, containing:

  • Video Input — Receives the input video stream and converts it to a streaming format. In our case, we use the AXI Stream protocol, which enables the image data to be transferred between processing elements.
  • Input Video Timing Detection — Detects the incoming video format. This is important as it allows the image processing chain to be configured for the detected video mode.
  • Video Direct Memory Access — Transfers the image stream into an external frame buffer, either one directly connected to the FPGA or, in the case of a SoC, the PS DDR, allowing the PS to access the image frames (a minimal PS-side configuration sketch follows this list).
  • Video Output — Outputs the processed video stream. For this programmable logic-based solution, this converts between the AXI Stream and the desired output format.
  • Output Video Timing Generation — Generates the timing signals for the output video stream.
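If we are targeting a Zynq, the VDMA write (S2MM) channel is typically configured from the PS at run time. The sketch below shows roughly what that configuration looks like using the Xilinx standalone AXI VDMA driver; the device ID, frame buffer address, and 1080p 24-bit RGB format are assumptions for illustration and will depend on your block design and xparameters.h.

```cpp
#include "xaxivdma.h"
#include "xparameters.h"
#include "xstatus.h"

// Assumed values for illustration only -- check xparameters.h and your design
#define VDMA_DEVICE_ID    XPAR_AXI_VDMA_0_DEVICE_ID
#define FRAME_BUFFER_ADDR 0x10000000   // assumed frame buffer in PS DDR
#define H_RES             1920         // assumed 1080p video
#define V_RES             1080
#define BYTES_PER_PIXEL   3            // assumed 24-bit RGB pixels

int configure_vdma_write(XAxiVdma *vdma)
{
    XAxiVdma_Config *cfg = XAxiVdma_LookupConfig(VDMA_DEVICE_ID);
    if (cfg == NULL)
        return XST_FAILURE;

    if (XAxiVdma_CfgInitialize(vdma, cfg, cfg->BaseAddress) != XST_SUCCESS)
        return XST_FAILURE;

    // Describe one frame of the incoming stream to the S2MM (write) channel
    XAxiVdma_DmaSetup write_cfg = {0};
    write_cfg.VertSizeInput       = V_RES;
    write_cfg.HoriSizeInput       = H_RES * BYTES_PER_PIXEL;  // line length in bytes
    write_cfg.Stride              = H_RES * BYTES_PER_PIXEL;
    write_cfg.FrameDelay          = 0;
    write_cfg.EnableCircularBuf   = 1;   // keep cycling through the frame stores
    write_cfg.EnableSync          = 0;
    write_cfg.PointNum            = 0;
    write_cfg.EnableFrameCounter  = 0;
    write_cfg.FixedFrameStoreAddr = 0;
    write_cfg.FrameStoreStartAddr[0] = FRAME_BUFFER_ADDR;

    if (XAxiVdma_DmaConfig(vdma, XAXIVDMA_WRITE, &write_cfg) != XST_SUCCESS)
        return XST_FAILURE;

    if (XAxiVdma_DmaSetBufferAddr(vdma, XAXIVDMA_WRITE,
                                  write_cfg.FrameStoreStartAddr) != XST_SUCCESS)
        return XST_FAILURE;

    // Start the S2MM channel so incoming frames land in the frame buffer
    return XAxiVdma_DmaStart(vdma, XAXIVDMA_WRITE);
}
```

A similar configuration of the read (MM2S) channel is used to feed frames back out of the buffer towards the video output path.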

Of course, within this basic image processing chain we can add functions such as Color Space Conversion, Image Mixing, and Bayer Filtering, along with, eventually, the image processing application itself.
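As an illustration of the kind of function we can drop into this chain, the sketch below shows a simple RGB-to-grayscale color space conversion written for HLS, operating pixel by pixel directly on the AXI Stream. The 24-bit RGB pixel format, the fixed 1080p frame size, the channel ordering, and the function name are assumptions for illustration only.

```cpp
#include "ap_axi_sdata.h"
#include "hls_stream.h"

// Assumed 24-bit RGB pixels on the AXI Stream (8 bits per channel)
typedef ap_axiu<24, 1, 1, 1> pixel_t;
typedef hls::stream<pixel_t> pixel_stream_t;

// Assumed fixed 1080p frame for illustration
static const int MAX_ROWS = 1080;
static const int MAX_COLS = 1920;

// Hypothetical block: converts an RGB stream to grayscale on all channels
void rgb_to_gray(pixel_stream_t &video_in, pixel_stream_t &video_out)
{
#pragma HLS INTERFACE axis port=video_in
#pragma HLS INTERFACE axis port=video_out
#pragma HLS INTERFACE ap_ctrl_none port=return

    for (int row = 0; row < MAX_ROWS; row++) {
        for (int col = 0; col < MAX_COLS; col++) {
#pragma HLS PIPELINE II=1
            pixel_t pixel = video_in.read();

            // Split the 24-bit data word into its colour channels
            ap_uint<8> r = pixel.data.range(7, 0);
            ap_uint<8> g = pixel.data.range(15, 8);
            ap_uint<8> b = pixel.data.range(23, 16);

            // Integer approximation of luma: (77R + 150G + 29B) / 256
            ap_uint<8> gray = (77 * r + 150 * g + 29 * b) >> 8;

            // Replicate the grayscale value on all three channels
            pixel.data = (gray, gray, gray);
            video_out.write(pixel);
        }
    }
}
```

Because only the data field is modified, the TUSER (start of frame) and TLAST (end of line) sideband signals pass straight through, so a block like this can sit anywhere in the streaming chain between the video input and the VDMA.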

Creating this basic image processing pipeline lets us verify that we can receive and transmit images before we integrate the image processing algorithms. That way, we know we are building on a solid base when it comes to testing them.

While we could do this using traditional RTL, it is much faster and easier to work at a higher level and use High-Level Synthesis to create the RTL for implementation. In this way, we can work directly with OpenCV to create image processing blocks. How we do this is what we'll be looking at in the next blog.
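To give a flavour of where we are heading, the sketch below performs the same grayscale conversion as above, but using the HLS video library (hls_video.h), whose hls::Mat type and functions mirror their OpenCV equivalents. The function name and the exact library calls shown here are my assumptions based on the Vivado HLS video library; the next blog will walk through the real details.

```cpp
#include "hls_video.h"
#include "ap_axi_sdata.h"
#include "hls_stream.h"

// Assumed maximum frame size and 24-bit RGB AXI Stream
static const int MAX_HEIGHT = 1080;
static const int MAX_WIDTH  = 1920;
typedef hls::stream<ap_axiu<24, 1, 1, 1> > axi_stream_t;

// Hypothetical top-level function for illustration
void gray_filter(axi_stream_t &video_in, axi_stream_t &video_out, int rows, int cols)
{
#pragma HLS INTERFACE axis port=video_in
#pragma HLS INTERFACE axis port=video_out
#pragma HLS DATAFLOW

    // hls::Mat mirrors cv::Mat but streams pixels rather than storing a frame
    hls::Mat<MAX_HEIGHT, MAX_WIDTH, HLS_8UC3> rgb(rows, cols);
    hls::Mat<MAX_HEIGHT, MAX_WIDTH, HLS_8UC1> gray(rows, cols);
    hls::Mat<MAX_HEIGHT, MAX_WIDTH, HLS_8UC3> gray_rgb(rows, cols);

    hls::AXIvideo2Mat(video_in, rgb);            // AXI Stream -> hls::Mat
    hls::CvtColor<HLS_RGB2GRAY>(rgb, gray);      // mirrors cv::cvtColor
    hls::CvtColor<HLS_GRAY2RGB>(gray, gray_rgb); // back to three channels for output
    hls::Mat2AXIvideo(gray_rgb, video_out);      // hls::Mat -> AXI Stream
}
```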

You can find the project for creating the base platform below.

Creating a Zynq or FPGA-Based, Image Processing Platform - This project will demonstrate how to create a simple image processing platform based on the Xilinx Zynq.

Adam Taylor
Adam Taylor is an expert in the design and development of embedded systems and FPGAs for several end applications (Space, Defense, Automotive).