This Algorithm Makes It Possible to See through Thick Fog by Tracing the Paths of Photons

Researchers from Stanford have developed an algorithm that works with a LIDAR-esque system to see through thick fog and even foam.

There is little doubt that self-driving cars will eventually become commonplace. Even today, driver-assistance “autopilot” systems, like the kind that have become infamous in Tesla cars, can provide almost complete autonomous control. But, due to safety concerns, the driver is required to remain alert and ready to take back control if necessary. That’s because the sensors and cameras, along with the computer vision systems used to interpret their data, can have difficulty in unusual scenarios. Simple fog, for example, can render LIDAR sensors useless. That’s why researchers from Stanford University have developed an algorithm that can see through thick fog and even foam.

The benefit of this system is obvious, because there are so many situations in which it would be helpful to peer through fog, clouds, dust, and so on. A self-driving car could, for example, see right through the kind of thick, soupy San Francisco fog that we ourselves have trouble with. This could also help pilots see through clouds and help underwater robots navigate through murky water. We can't see clearly through thick fog because the water droplets scatter light unpredictably. But some of the photons in that light do make it through the fog. This algorithm analyzes those photons and their paths in order to reconstruct whatever is hidden in the fog. This provides a rudimentary 3D view of objects we wouldn’t be able to see otherwise.

This system works a lot like a conventional LIDAR sensor, which creates 3D models by shining a laser across the surface of objects and measuring the reflections of light. The problem is that only a small portion of the photons in that light will make it through fog. The researchers solved that problem with a very sensitive photon detector that can pick up those few photons and provide information about when and where they collide with the detector. Some of those photons will have bounced off whatever objects are hidden by the fog, and this lets the algorithm mathematically reconstruct the geometry of those objects. This is computationally intensive and the resulting geometry isn’t exactly high-definition, but it is adequate to give a self-driving car the information needed to avoid obstacles and hazards. As it stands, the scanning process takes too long — anywhere from a minute to an hour — to be practical, but that can be improved with further development.
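To get a feel for the core time-of-flight idea, here is a toy sketch (not the Stanford algorithm, and all numbers are made up for illustration): photons that bounce off a hidden object arrive clustered around one round-trip time, while fog-scattered photons arrive smeared across the whole measurement window. Finding the peak of the arrival-time histogram recovers the object's distance.

```python
import numpy as np

C = 3e8  # speed of light, m/s
rng = np.random.default_rng(0)

true_distance = 25.0  # hypothetical hidden object 25 m away
round_trip = 2 * true_distance / C  # ~166.7 ns

# Fog-scattered photons: arrival times smeared over a 400 ns window.
scattered = rng.uniform(0, 400e-9, size=5000)
# The few "signal" photons that reached the object and bounced back,
# clustered tightly around the true round-trip time.
signal = rng.normal(round_trip, 0.5e-9, size=50)
arrivals = np.concatenate([scattered, signal])

# Histogram arrival times (1 ns bins) and locate the peak bin.
counts, edges = np.histogram(arrivals, bins=400, range=(0, 400e-9))
peak = np.argmax(counts)
peak_time = 0.5 * (edges[peak] + edges[peak + 1])
estimated_distance = C * peak_time / 2
print(f"estimated distance: {estimated_distance:.1f} m")
```

The real system does far more than this, reconstructing full 3D geometry rather than a single distance, but the principle is the same: even a handful of unscattered photons carries recoverable timing information.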

Cameron Coward
Writer for Hackster News. Proud husband and dog dad. Maker and serial hobbyist.