Look Out Below!

Computer vision gives autonomous cars a fighting chance against potholes by telling them what dangers lie ahead.

Nick Bild
3 years ago · Machine Learning & AI
(📷: T. Lee et al.)

Everyone has stretches of road that they loathe driving on because of the window-splitting, axle-cracking potholes found there. The American Automobile Association estimates that pothole-related damage costs drivers in the US over three billion dollars per year in repairs. The consequences can be far more severe than automobile damage; in India, over three thousand people are killed each year in accidents involving potholes. The danger only grows for self-driving cars, which can lose control if they hit one.

With these problems in mind, researchers at the Korea Institute of Civil Engineering and Building Technology have developed a machine learning-based approach to give autonomous cars an awareness of nearby potholes. The system uses computer vision to watch the road ahead and detect any potential problems coming up.

The problem is deceptively complicated in real-world scenarios — changes in lighting and weather can dramatically alter how a pothole appears to a computational algorithm. To overcome these challenges, the team developed a preprocessing pipeline in which a convolutional neural network (CNN) calculates an enhancement factor, then adjusts the brightness of the road-surface image for maximal effectiveness in the next step. Those images are then fed into a fully convolutional network that performs road-crack segmentation. The output of this model shows which parts of the road are damaged, and which are not.
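The two-stage idea — estimate a brightness enhancement factor, apply it, then segment the corrected image — can be sketched roughly as below. This is a minimal illustration only: the learned CNN and segmentation network are stood in for by simple hand-written heuristics, and all function names and thresholds are assumptions, not the researchers' code.

```python
import numpy as np

def enhancement_factor(image, target_mean=0.5):
    """Stand-in for the paper's CNN: scale brightness toward a target mean.
    (The real system learns this factor; a mean-based heuristic is used here.)"""
    return target_mean / max(image.mean(), 1e-6)

def preprocess(image, target_mean=0.5):
    """Apply the estimated enhancement factor, clipping to the valid range."""
    return np.clip(image * enhancement_factor(image, target_mean), 0.0, 1.0)

def segment_cracks(image, threshold=0.3):
    """Stand-in for the segmentation network: flag unusually dark pixels
    as damage. Returns a binary mask with the same shape as the input."""
    return (image < threshold).astype(np.uint8)

# Usage: a dim synthetic "road image" with a dark crack down one column.
road = np.full((8, 8), 0.25)
road[:, 4] = 0.02  # simulated crack
mask = segment_cracks(preprocess(road))
# After brightness correction, only the crack column is flagged as damage.
```

The point of the two stages is that the detector sees images at a consistent brightness, so the same decision boundary works across lighting conditions.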

To collect the training data, a camera was attached to the top of a vehicle’s windshield so that it had a view of the road while driving. While traveling at various speeds, a total of 14,400 road-surface images were captured. These images were classified into six categories — artificial joints, road markings, roadside structures, shadows on the road, vehicle images, and road cracks — then split into training, validation, and test datasets and used to train the model. Running the model against the test dataset yielded an F1-score of 0.85. Performance degraded substantially when image brightness was manually altered and the preprocessing step was skipped, demonstrating the value of the brightness-correction CNN.
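For reference, an F1-score is the harmonic mean of precision and recall. The counts in the sketch below are purely illustrative (they are not from the study); they are chosen so that precision and recall both equal 0.85, which yields the same F1 value the researchers report.

```python
def f1_score(tp, fp, fn):
    """Compute F1 from raw true-positive, false-positive, and
    false-negative counts: the harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: precision = recall = 0.85, so F1 = 0.85.
print(round(f1_score(tp=85, fp=15, fn=15), 2))  # → 0.85
```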

This initial proof-of-concept work was done on a small scale, with a single vehicle collecting 14,400 images from a geographically limited area. Further, the study only sought to determine whether road damage was present, not to classify the type of damage. Additional work will be needed to develop a high-performance model that identifies the type of road damage across a wide array of environments, but this research represents a great first step toward that goal.

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.