NASA's cGIANT Looks to Provide Lunar Explorers with Artificially Intelligent Vision-Based Navigation
Using height data processed into 3D vistas, a team of researchers is looking to build a handheld vision-based navigation AI.
NASA is working on a way for explorers to navigate on the moon — replacing the global navigation satellite system (GNSS) that drives terrestrial vehicle route-finding with a vision-based artificial intelligence capable of finding its way using lunar landmarks.
"For safety and science geotagging, it’s important for explorers to know exactly where they are as they explore the lunar landscape," explains Alvin Yew, NASA Goddard Space Flight Center research engineer, of the project. "Equipping an onboard device with a local map would support any mission, whether robotic or human."
NASA has a range of projects under way for communication and navigation when humans — and robots — return to our moon, including LunaNet, which will provide both data networking and navigation capabilities. The vision-based system, by contrast, would serve as a secondary navigation system — something its creators point out is vital when you're wandering around the moon's face.
"It’s critical to have dependable backup systems when we’re talking about human exploration," Yew explains. "The motivation for me was to enable lunar crater exploration, where the entire horizon would be the crater rim. Conceptually, it’s like going outside and trying to figure out where you are by surveying the horizon and surrounding landmarks. While a ballpark location estimate might be easy for a person, we want to demonstrate accuracy on the ground down to less than 30 feet. This accuracy opens the door to a broad range of mission concepts for future exploration."
The project uses data from the Lunar Orbiter Laser Altimeter (LOLA), taking elevation models and creating a 3D panorama that can be run through an artificial intelligence system to recognize landmarks and provide mapping and navigation. The researchers propose loading the data and model onto a handheld device — based on the Goddard Image Analysis and Navigation Tool (GIANT), previously used to verify navigation data for the OSIRIS-REx mission to asteroid Bennu and publicly available on GitHub — to provide portable landmark recognition capabilities.
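To give a flavor of the idea, the sketch below shows one simple way such horizon-based localization could work in principle: compute the elevation angle of the horizon at each azimuth from a digital elevation model, then find the candidate location whose predicted horizon profile best matches an observed one. This is an illustrative toy, not NASA's actual GIANT/cGIANT pipeline — the function names, grid-based ray marching, and least-squares matching here are all assumptions for demonstration.

```python
import numpy as np

def horizon_profile(dem, x0, y0, cell_size=1.0, n_az=360, max_r=50):
    """Horizon elevation angle (radians) at each azimuth, as seen
    from grid cell (x0, y0) of a digital elevation model (DEM)."""
    h0 = dem[y0, x0]
    profile = np.zeros(n_az)
    for i, az in enumerate(np.linspace(0, 2 * np.pi, n_az, endpoint=False)):
        dx, dy = np.cos(az), np.sin(az)
        best = 0.0  # flat terrain gives a zero-elevation horizon
        # March outward along the azimuth, keeping the steepest angle seen
        for r in range(1, max_r):
            x = int(round(float(x0 + r * dx)))
            y = int(round(float(y0 + r * dy)))
            if not (0 <= x < dem.shape[1] and 0 <= y < dem.shape[0]):
                break
            ang = np.arctan2(dem[y, x] - h0, r * cell_size)
            best = max(best, ang)
        profile[i] = best
    return profile

def locate(observed, dem, candidates):
    """Return the candidate (x, y) whose predicted horizon profile
    best matches the observed one, by least-squares error."""
    return min(candidates,
               key=lambda p: np.sum((horizon_profile(dem, *p) - observed) ** 2))
```

Conceptually, this mirrors Yew's description of surveying the horizon to fix one's position: a real system would match a camera-derived skyline against profiles pre-rendered from LOLA elevation data, rather than the synthetic grid used here.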
More information on the project is available on the NASA website, along with the suggestion that the same technology could be used on Earth as a backup to GNSS systems like GPS in the event of signal loss.
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.