MicroZed Chronicles: PetaLinux Image Processing System

How to create the device tree for a PetaLinux imaging project on the Ultra96.

Adam Taylor
4 months ago

In last week's blog, I examined the elements required to generate the image processing chain in the Vivado Design Suite. This week, we are going to examine the more complex element: the PetaLinux build. This requires that we be familiar with device trees.

Once the design is completed in Vivado, the first thing we need to do is export the XSA and create a new PetaLinux project. If you are unsure how to do this, please check out my PetaLinux miniseries (P1, P2, P3 and P4).
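For reference, the command-line flow looks roughly like the sketch below. The project name and XSA path are placeholders for illustration; adjust them to match your own design.

```shell
# Create a new PetaLinux project for a Zynq UltraScale+ device
# (project name is an example)
petalinux-create -t project -n u96_image_proc --template zynqMP
cd u96_image_proc

# Import the hardware definition exported from Vivado
# (XSA path is an example)
petalinux-config --get-hw-description=../hw/design_1_wrapper.xsa
```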

To be able to implement an image processing pipeline in PetaLinux, we need to configure the following:

  • Enable I2C support for the OV5640 camera used on the Pcam 5C in the kernel.
  • Enable the V4L2 packages in the root file system.
  • Create the device tree containing the device graph for the image processing pipeline.

Once the PetaLinux project has been created, the next step is to update the kernel configuration to include the OV5640 driver. Searching in the kernel configuration dialog for the OV5640 will show the location of the driver and the necessary dependencies.

To enable the OV5640 driver, we must first disable the “autoselect ancillary drivers” under the device drivers / multimedia support menu.

Disabling the auto select menu will make visible the option to configure I2C encoders, decoders, sensors, and other helper chips.

Scroll down this menu until you see the OV5640 sensor, and enable support for it.

With that complete, save the kernel configuration and exit.
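The kernel configuration steps above can be summarized as follows; the exact menu wording may differ slightly between kernel versions.

```shell
# Open the kernel configuration menu
petalinux-config -c kernel

# In menuconfig, navigate to:
#   Device Drivers -> Multimedia support
#     [ ] Autoselect ancillary drivers        <- disable this first
#   Then the sensor menu becomes visible:
#     I2C Encoders, decoders, sensors and other helper chips
#       <*> OmniVision OV5640 sensor support  <- enable
# Save the configuration and exit
```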

The next step is to enable support for V4L2 in the rootfs. Using the rootfs configuration dialog, we are able to include this package under the PetaLinux package groups.
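As a sketch, the rootfs step looks like this; the package group name below matches the V4L utilities group shipped with recent PetaLinux releases, but check your own rootfs menu if it differs.

```shell
# Open the root file system configuration menu
petalinux-config -c rootfs

# In the dialog, enable under "Petalinux Package Groups":
#   packagegroup-petalinux-v4lutils
# This pulls in the V4L2 utilities, including media-ctl
```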

With the kernel and rootfs configured correctly, we are now ready to start implementing the device graph in our device tree.

As we are making adjustments to the device tree, we need to edit the system-user.dtsi under the meta-user layer to ensure the changes we make are applied in the build process.
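In a default PetaLinux project layout, that file sits at the path shown below; adjust if your project uses a customized device-tree recipe.

```shell
# Edit the user device tree include in the meta-user layer
vi project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
```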

Device graphs enable the user to define a collection of IPs within the programmable logic which work together as one element. Each element of the processing chain will therefore have several child ports, each of which contains an endpoint. We will interconnect these endpoints to describe the video processing pipeline. When creating these device graphs, I find it easier to draw them out first before I write the actual device tree.

As such, the device graph diagram for the image processing chain is below.

Implementing this within the device tree is straightforward. We connect each of the endpoints using the port definition of the device tree. Each of the port definitions contains an endpoint and a remote endpoint it connects with. For example, the code element below connects the MIPI CSI-2 RX subsystem with the demosaic.

&mipi_csi2_rx_subsyst_0 {
    xlnx,vc = <0x4>;
    csiss_ports: ports {
        #address-cells = <1>;
        #size-cells = <0>;
        csiss_port0: port@0 {
            reg = <0>;
            xlnx,video-format = <0>;
            xlnx,video-width = <8>;
            mipi_csi2_rx_0_to_demosaic_0: endpoint {
                remote-endpoint = <&demosaic_0_from_mipi_csi2_rx_0>;
            };
        };
        csiss_port1: port@1 {
            reg = <1>;
            xlnx,video-format = <0>;
            xlnx,video-width = <8>;
            csiss_in: endpoint {
                data-lanes = <1 2>;
                remote-endpoint = <&ov5640_to_mipi_csi2>;
            };
        };
    };
};

&v_demosaic_0 {
    compatible = "xlnx,v-demosaic";
    reset-gpios = <&gpio 86 GPIO_ACTIVE_LOW>;
    ports {
        #address-cells = <1>;
        #size-cells = <0>;
        port@0 {
            reg = <0>;
            xlnx,video-width = <8>;
            demosaic_0_from_mipi_csi2_rx_0: endpoint {
                remote-endpoint = <&mipi_csi2_rx_0_to_demosaic_0>;
            };
        };
        port@1 {
            reg = <1>;
            xlnx,video-width = <8>;
            demosaic_0_to_fb: endpoint {
                remote-endpoint = <&vcap_in>;
            };
        };
    };
};

Looking at this element of the device tree, you can see each endpoint connects back to the previous remote endpoint.

You will also notice that I have defined in the device tree the reset GPIO which we implemented last week.

Once we have completed our device tree, we are then able to build and package our PetaLinux project and boot it on the Ultra96-V2 development board.
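A minimal build-and-package sequence is sketched below; the `--fpga` bitstream and other boot components are picked up from the default build output locations, so adjust the options if your project deviates from the defaults.

```shell
# Build the PetaLinux project
petalinux-build

# Package the boot image (BOOT.BIN) from the default build outputs
petalinux-package --boot --u-boot --fpga --force
```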

If we have correctly implemented the image processing chain in the device tree, we will see boot messages reporting the successful probing and registration of the pipeline elements.

However, the real proof comes once the board has booted. Here, we can use a terminal window to inspect the devices available. We expect to see video, media, and v4l elements available under the /dev/ directory.
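On the booted board, a quick listing confirms the device nodes are present; the exact node numbers will depend on your system.

```shell
# List the video, media, and V4L sub-device nodes created at boot
ls /dev/video* /dev/media* /dev/v4l-subdev*
```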

Examining the directory for the device indicates that the necessary elements of the image processing chain have been correctly registered.

Finally, to check the image processing chain configuration, we can also use the media-ctl command which is installed as part of the V4L2 PetaLinux package in the rootfs.
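For example, the following prints the topology of the media pipeline; the media device node may vary on your system.

```shell
# Print the media pipeline topology: entities, pads, and links
media-ctl -d /dev/media0 -p
```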

This shows that the image processing chain has been correctly implemented in PetaLinux, and we can now get started developing our applications.

See My FPGA / SoC Projects: Adam Taylor on Hackster.io

Get the Code: ATaylorCEngFIET (Adam Taylor)

Adam Taylor
Adam Taylor is an expert in the design and development of embedded systems and FPGAs for a range of end applications (space, defense, automotive).