MicroZed Chronicles: Deep Dive of the Sensor Demosaic and Gamma LUT

A closer look at the Sensor Demosaic and Gamma LUT IP cores.

Over the years we have looked at lots of embedded vision systems, as they form a cornerstone of many exciting applications, e.g. vision-guided robotics, autonomous vehicles, etc.

However, as I was creating a recent embedded vision project targeting the Genesys ZU platform, I noticed that several of the key building blocks were deprecated after Vivado 2019.1 and replaced with more capable IP blocks.

Of course, because these blocks are more capable, they require a little more configuration. So I thought a deep-dive blog would be of interest.

The complete list of blocks which have been replaced can be found here, but the two main blocks I used in the demo are:

  • Sensor Demosaic: This replaces the Color Filter Array IP and performs the debayer operation that converts raw pixels to RGB.
  • Gamma LUT: This replaces the Gamma Correction IP and performs gamma correction.

These IP cores are crucial when we are working with video processing systems; without them, we would have to create our own implementations, increasing development time.

One big advantage of the new IP cores over the ones they replaced is that they are now bundled with Vivado, so no separate licenses are required.

Let's start with the simpler of the two blocks, the Sensor Demosaic. This block converts the RAW pixel values received from the sensor into pixels which contain RGB elements. The size of the output pixel depends on the input width; for example, a 10-bit RAW pixel will result in a 30-bit RGB pixel on the output.
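The width arithmetic is simple to sketch in code. The packing order below ({R, B, G} with green in the least significant bits) follows the common Xilinx AXI4-Stream video convention, but treat it as an assumption and check your configuration; the helper names are mine, not part of any driver.

```c
#include <stdint.h>

/* Width of one output pixel: the demosaic emits one R, one G and one B
 * component per pixel, each at the RAW input width. */
static unsigned rgb_width(unsigned raw_bits)
{
    return 3u * raw_bits;
}

/* Pack three 10-bit components into one 30-bit word.
 * Assumed ordering: R in the top bits, then B, then G in the LSBs. */
static uint32_t pack_rgb10(uint16_t r, uint16_t g, uint16_t b)
{
    return ((uint32_t)(r & 0x3FFu) << 20) |
           ((uint32_t)(b & 0x3FFu) << 10) |
            (uint32_t)(g & 0x3FFu);
}
```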

Configuration of the block in Vivado is quite straightforward: all we need to define is the maximum image size, the maximum input pixel width, and the interpolation method.
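The interpolation step itself is easy to picture. As an illustration only (this toy sketch is not the IP's actual algorithm, which offers more sophisticated interpolation), here is a nearest-neighbour style debayer of a single 2x2 RGGB quad:

```c
#include <stdint.h>

/* One RGGB quad from the Bayer mosaic:
 *     R   G1
 *     G2  B
 * Every pixel in the quad receives the quad's R and B samples and the
 * average of its two G samples -- a toy stand-in for the IP's
 * interpolation, just to show what "debayering" computes. */
typedef struct { uint16_t r, g, b; } rgb_t;

static rgb_t debayer_quad(uint16_t r, uint16_t g1, uint16_t g2, uint16_t b)
{
    rgb_t out = { r, (uint16_t)((g1 + g2) / 2), b };
    return out;
}
```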

The rest of the configuration is achieved over the AXI Lite interface at run time. This enables us to change the image size on the fly, which is very useful when high frame rates are required from the sensor, as reducing the image size increases the achievable frame rate. It is also possible to change the debayer pattern on the fly should the sensor change as development progresses.

Setting up the Sensor Demosaic in SDK or Vitis is pretty straightforward as well. The BSP provides a sensor demosaic driver, xv_demosaic; within this driver are all of the functions needed to get up and running.

Using this driver, we can configure the demosaic IP block to debayer the image and stream out RGB pixels as shown in the code below.

// Look up the core's configuration and initialize the driver instance
XV_demosaic_Initialize(&cfa, XPAR_V_DEMOSAIC_0_DEVICE_ID);
// Set the active image size to match the incoming video stream
XV_demosaic_Set_HwReg_width(&cfa, video.width);
XV_demosaic_Set_HwReg_height(&cfa, video.height);
// Select the Bayer phase that matches the sensor's CFA alignment
XV_demosaic_Set_HwReg_bayer_phase(&cfa, 0x03);

Once we have converted the RAW pixels to RGB pixel values, the next stage in many processing pipelines is to correct the image for gamma.

Gamma correction is performed to adapt the linear RGB pixel values to match the non-linear characteristics of displays.

The Gamma LUT IP block allows us to correct for gamma on the fly, using software to update the lookup tables within the IP core. Again, configuration within Vivado is very simple, requiring only the image size, pixel size, and number of pixels per clock.

Within the software environment, we can then use the xv_gamma_lut drivers provided as part of the BSP to configure the IP core.

When it comes to driving this IP core at run time, we also need to configure the image height and width, along with the video format (RGB, YUV, etc.).

However, we also need to configure the gamma correction tables; without them, the output video will be blank.

There are several algorithms that can be used for gamma correction; my recent example used the following approach.

#include <math.h>

void gamma_calc(float gamma_val)
{
    int i;

    /* Populate the table with Vout = Vin^(1/gamma), scaled for 10-bit pixels */
    for (i = 0; i < 1024; i++) {
        gamma_reg[i] = pow((i / 1024.0), (1 / gamma_val)) * 1024.0;
    }
}

This populates an array with the gamma correction factors — the size of the array depends on the size of the pixel. An 8-bit pixel requires 256 entries while a 10-bit pixel requires 1,024 entries and so on.
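The entry count is just 2^bits, and the byte count passed to the driver's *_Bytes write follows from the entry width; the helpers below assume 16-bit table entries (consistent with 1,024 entries being written as 2,048 bytes), and their names are mine:

```c
/* Number of LUT entries for a given component bit depth:
 * one entry per possible pixel code, i.e. 2^bits. */
static unsigned lut_entries(unsigned bits)
{
    return 1u << bits;
}

/* Byte count for the driver's *_Bytes write, assuming 16-bit entries */
static unsigned lut_bytes(unsigned bits)
{
    return lut_entries(bits) * 2u;
}
```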

The Gamma LUT IP contains three memory regions to store the lookup tables. Each region maps to one color channel:

  • LUT0 = Red Channel
  • LUT1 = Green Channel
  • LUT2 = Blue Channel

This enables the use of different correction factors for different color channels if desired.

Within the BSP driver, there are several functions which can be used to configure the Gamma LUT. There are functions provided to download the LUT table values, too.

// Initialize the driver and set the image size and video format
XV_gamma_lut_Initialize(&gamma_inst, XPAR_V_GAMMA_LUT_0_DEVICE_ID);
XV_gamma_lut_Set_HwReg_width(&gamma_inst, video.width);
XV_gamma_lut_Set_HwReg_height(&gamma_inst, video.height);
XV_gamma_lut_Set_HwReg_video_format(&gamma_inst, 0x00);
// Download the same table to all three channels:
// 1,024 16-bit entries = 2,048 bytes per channel
XV_gamma_lut_Write_HwReg_gamma_lut_0_Bytes(&gamma_inst, 0, (int *) gamma_reg, 2048);
XV_gamma_lut_Write_HwReg_gamma_lut_1_Bytes(&gamma_inst, 0, (int *) gamma_reg, 2048);
XV_gamma_lut_Write_HwReg_gamma_lut_2_Bytes(&gamma_inst, 0, (int *) gamma_reg, 2048);

Once both these IP cores were up and running, I was able to get images through the system with the correct color performance.

Changing the gamma table settings on the fly produced visibly different results in the output image, as can be seen below.

Being able to configure these blocks correctly in our design is a critical aspect of developing an image processing system.

Now that we understand them in a little more depth, deploying them in applications will be much easier... I hope!

See My FPGA / SoC Projects: Adam Taylor on Hackster.io

Get the Code: ATaylorCEngFIET (Adam Taylor)

Access the MicroZed Chronicles Archives with over 300 articles on the FPGA / Zynq / Zynq MPSoC, updated weekly, at MicroZed Chronicles.

Adam Taylor
Adam Taylor is an expert in the design and development of embedded systems and FPGAs for several end applications (Space, Defense, Automotive).