New Approach to Autonomous Vehicles Lets Cars Create and Learn From Memories

Cornell researchers are studying the ability of autonomous vehicles to use past traversals to “learn the way” to familiar destinations.

Researchers at Cornell used a car equipped with LiDAR to compile a dataset of routes around Ithaca that autonomous driving systems can recall when driving. (📷: Cornell University)

Many in the auto and tech industries have been pushing self-driving cars as the future of transportation for years now; Tesla is the most visible, but Uber and Google have also invested money and resources in their own efforts. Autonomous vehicles must navigate busy city streets as well as quieter environments, using AI to plan routes and to recognize pedestrians, other vehicles, and a variety of potential obstacles. Yet these efforts have hit significant roadblocks on the way to safe and reliable autonomous vehicles.

Self-driving AI is trained to sense the car’s surroundings with the help of artificial neural networks, but these cars are in a constant state of “seeing” the world for the first time. Without memories, they cannot build up a sense of the conditions and changes along regular routes, and they struggle in adverse weather, when it is impossible to rely safely on their sensors alone.

Researchers at the Cornell Ann S. Bowers College of Computing and Information Science and the College of Engineering have been working to overcome these limitations. They seek to give autonomous vehicles the ability to create “memories” of previous experiences for use in future navigation. For instance, a shape such as a tree that might be mistaken for a person when first scanned at a distance can be clearly identified up close and then recognized immediately on every subsequent pass, even in snow or fog.

The team at Cornell compiled a dataset by driving a car equipped with LiDAR, or Light Detection and Ranging, repeatedly along a roughly ten-mile loop in and around Ithaca, NY, capturing a range of conditions and times of day, as well as environments including urban streets, the college campus, and highways. The resulting dataset contains more than 600,000 recorded scenes.

One part of the new approach to training vehicles, called HINDSIGHT, uses neural networks to compute descriptors of objects as the car passes them along a route. These descriptors are then compressed and stored on a virtual map, dubbed SQuaSH, for Spatial-Quantized Sparse History. Any time the self-driving car traverses a route it has driven before, it can query the local SQuaSH database. The database is also continuously updated and shared across vehicles, enriching the information available. The dataset the team compiled to develop and evaluate this approach is called Ithaca365.
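To make the SQuaSH idea concrete, here is a minimal toy sketch of a spatially quantized feature store: descriptors are compressed and filed under grid cells keyed by quantized world coordinates, so a later traversal can cheaply look up what was previously observed near its current position. The class name, cell size, and float16 "compression" are illustrative assumptions, not Cornell's actual implementation.

```python
import numpy as np
from collections import defaultdict

class SquashMap:
    """Toy spatially quantized sparse history (inspired by SQuaSH).

    Descriptors observed along a route are compressed (here, simple
    float16 quantization) and stored per grid cell, so a later pass
    can query the history near its current position.
    """

    def __init__(self, cell_size_m=5.0):
        self.cell_size = cell_size_m
        self.cells = defaultdict(list)  # (i, j) grid cell -> descriptors

    def _key(self, x, y):
        # Quantize continuous coordinates to a sparse grid cell.
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, x, y, descriptor):
        # Compress the descriptor before storing to keep the map small.
        self.cells[self._key(x, y)].append(
            np.asarray(descriptor, dtype=np.float16))

    def query(self, x, y):
        # Return all stored descriptors from the cell covering (x, y).
        return self.cells.get(self._key(x, y), [])

# A first traversal stores a descriptor; a later pass nearby retrieves it.
squash = SquashMap(cell_size_m=5.0)
squash.insert(12.3, 48.7, np.random.rand(32))   # pass 1: remember this spot
hits = squash.query(13.9, 49.2)                 # pass 2: same 5 m cell
print(len(hits))  # -> 1
```

The sparse, quantized layout is the point: only cells the car has actually visited consume memory, and a lookup touches a single cell rather than the whole history.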

Another component of the Cornell researchers’ self-driving system is called MODEST, for Mobile Object Detection with Ephemerality and Self-Training. At the start, the vehicle’s neural network has no knowledge of any objects or streets; it has never been exposed to them and instead learns the entire perception pipeline from scratch. Through multiple traversals of the same route, the vehicle gradually teaches itself to differentiate between other traffic participants and objects that are safe to ignore. The algorithm eventually learns to detect these objects reliably, even on routes it did not “learn.”
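The core self-supervision signal here can be sketched in a few lines: across repeated drives, regions occupied on every pass (walls, trees) are persistent background, while regions occupied on only some passes are "ephemeral" and can serve as pseudo-labels for mobile objects, which a detector is then trained on. The 1-D occupancy grid and threshold below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def ephemerality_pseudo_labels(traversals, persistence_threshold=0.8):
    """Toy ephemerality cue from repeated traversals (MODEST-style idea).

    traversals: list of boolean occupancy grids, one per drive of the
    same route. Returns a boolean grid marking cells occupied on the
    latest pass that were NOT persistently occupied across passes,
    i.e. candidate mobile objects to self-train a detector on.
    """
    occupancy_rate = np.mean(np.stack(traversals), axis=0)
    occupied_now = traversals[-1]          # what the latest pass sees
    persistent = occupancy_rate >= persistence_threshold
    # Occupied now but not persistent across passes -> likely mobile.
    return occupied_now & ~persistent

# Three passes over a 4-cell strip: cell 0 is a wall (always occupied),
# cell 2 holds a car that appears only on the last pass.
passes = [np.array([1, 0, 0, 0], bool),
          np.array([1, 0, 0, 0], bool),
          np.array([1, 0, 1, 0], bool)]
labels = ephemerality_pseudo_labels(passes)
print(labels)  # -> [False False  True False]
```

In a full system these pseudo-labels would seed a detector that is retrained over several rounds, letting it generalize to routes it has never traversed.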

The hope is that approaches like HINDSIGHT and MODEST will reduce development costs for self-driving cars, making autonomous vehicles more efficient by letting them learn the routes on which they will most often be used. A video detailing the process and the research’s potential is available via Cornell.

