Night Rider

The HawkDrive system utilizes an NVIDIA Jetson, stereo cameras, and an innovative AI algorithm to give self-driving cars clear night vision.

Nick Bild
A HawkDrive installation (📷: Z. Guo et al.)

Low light is one of the trickiest conditions that computer vision algorithms have to deal with. Without sufficient illumination, images lose contrast and clarity. This poses a significant challenge for algorithms designed to recognize objects, faces, or patterns, because they rely heavily on distinct features and details that can no longer be clearly seen. When these algorithms struggle, the likely results are misclassifications and other errors.

For certain applications, like self-driving cars, these sorts of problems are unacceptable. The computer vision systems used in autonomous vehicles play a critical role in identifying obstacles, pedestrians, and road signs to make real-time decisions. However, when faced with low light conditions caused by dusk, dawn, night, or bad weather, traditional computer vision algorithms may falter, compromising the safety of the passengers and the reliability of the vehicle.

To address this issue, researchers and engineers are actively developing innovative solutions to improve the performance of computer vision in low-light environments. Focusing on self-driving vehicles, a team of engineers at the Skolkovo Institute of Science and Technology has developed a system called HawkDrive that is designed to give computer vision algorithms better night vision. HawkDrive consists of both hardware and software elements that reduce the risk of failures in perception, navigation, and planning tasks.

The researchers recognized that errors in depth estimation, as well as noise introduced into images, are among the primary problems caused by low light levels. Traditional RGB cameras often capture blurry images with a low dynamic range under these conditions, and the downstream processing steps are not powerful enough to recover the lost information.

For reasons such as these, HawkDrive leverages a stereo camera system and a novel processing algorithm tuned for understanding night scenes. By capturing pairs of images with two global-shutter cameras from different perspectives, the system can calculate highly accurate depth measurements. The cameras are connected to an NVIDIA Jetson AGX Xavier single-board computer, along with a hardware trigger that ensures the image pairs are captured at exactly the same moment.
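The team's own depth pipeline isn't reproduced here, but the underlying principle is straightforward: once a synchronized, rectified image pair is available, per-pixel disparity can be converted into metric depth. Below is a minimal sketch using OpenCV's semi-global block matching, with placeholder focal length and baseline values standing in for a real camera calibration.

```python
# Minimal sketch of stereo depth estimation from a synchronized image pair.
# Assumes the two global-shutter cameras are already calibrated and the
# images rectified; the focal length and baseline below are placeholders,
# not HawkDrive's actual calibration.
import cv2
import numpy as np

FOCAL_LENGTH_PX = 700.0   # hypothetical focal length in pixels
BASELINE_M = 0.12         # hypothetical distance between the two cameras (meters)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching finds, for each pixel, how far it shifts
# between the two views (the disparity).
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point to float

# Depth is inversely proportional to disparity: Z = f * B / d.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]

print("median depth of valid pixels:", np.median(depth_m[valid]), "m")
```

In low light, the noise in each image makes these pixel correspondences harder to find, which is exactly why HawkDrive enhances the frames before they reach the depth and segmentation stages.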

The captured images are processed by a machine learning pipeline consisting of Signal-to-Noise-Ratio-aware transformers and convolutional models, which were adapted to perform low-light enhancement for nighttime driving scenes. In addition, by utilizing a SegFormer-based semantic segmentation network, HawkDrive can enhance images based on both physical and semantic information, increasing the system's accuracy. These algorithms run locally onboard the Jetson AGX Xavier.
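The specific enhancement and SegFormer weights used in HawkDrive aren't spelled out here, so the snippet below only illustrates the segmentation half of the pipeline, using a publicly available Cityscapes-finetuned SegFormer checkpoint from Hugging Face as a stand-in. The input filename is likewise a placeholder for the output of the low-light enhancement stage.

```python
# Sketch of SegFormer-based semantic segmentation on a (pre-enhanced) frame.
# The checkpoint below is a public Cityscapes model, not HawkDrive's weights.
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

checkpoint = "nvidia/segformer-b0-finetuned-cityscapes-1024-1024"
processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint).eval()

# Placeholder: a frame that has already been through low-light enhancement.
image = Image.open("enhanced_frame.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_classes, H/4, W/4)

# Upsample to the original resolution and take the per-pixel class.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
label_map = upsampled.argmax(dim=1)[0]  # per-pixel class indices (road, car, person, ...)
print(label_map.shape)
```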

A number of experiments were conducted under low light levels, both with and without the nighttime enhancement of images performed by HawkDrive. With respect to depth estimation, image enhancement was found to reduce errors by 27.16 percent. Furthermore, it was observed that pixel accuracy was boosted by 0.76 percent in semantic segmentation tasks.

The team notes that they are still actively exploring other areas that could further improve their pipeline. But in any case, HawkDrive is a great first step. Pairing hardware like an NVIDIA Jetson with a novel analysis pipeline that can run fully on-device could ultimately lead to the development of a practical night vision system for autonomous vehicles.

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.