Autonomous robots navigate spaces using several different methods, including GPS and LIDAR, which are useful in their own right but cannot match the human sense of sight. Being able to see makes navigating a breeze, and researchers from Carnegie Mellon University and Facebook AI Research have developed a system that enables robots to travel from point A to point B using object identification. For example, if a robot needs to grab a soda from the refrigerator in the kitchen but is currently in the living room, it can identify a couch as its starting point and the fridge as its endpoint.
To accomplish that feat, the team designed what they term "SemExp," or Goal-Oriented Semantic Exploration, which uses machine learning to recognize objects. But the system is more than that. It also allows robots to understand where those objects are likely to be found, which lets them think strategically about what path to take beforehand.
Typically, robots that navigate using object recognition identify objects in specific spaces, e.g. a dining table in a kitchen, and tag them as specific objects in specific areas, meaning those identifiers are static. SemExp instead takes a modular approach: it uses semantic insight to infer the locations where a particular object is most likely to be found, then combines that inference with route planning and classical navigation methods.
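To make that modular idea concrete, here is a toy sketch in Python. It is not SemExp's actual implementation (the real system learns semantic maps and policies with neural networks); the priors, room names, and floor plan below are all invented for illustration. The sketch hand-codes a "semantic prior" that scores rooms by how likely they are to contain the target object, picks the best-scoring room as the goal, and then plans a shortest room-to-room route with breadth-first search standing in for a classical planner.

```python
from collections import deque

# Hypothetical semantic priors: probability of finding each object
# in each room. SemExp learns this kind of knowledge from data.
SEMANTIC_PRIORS = {
    "refrigerator": {"kitchen": 0.9, "living_room": 0.05, "bedroom": 0.05},
    "couch": {"living_room": 0.8, "bedroom": 0.15, "kitchen": 0.05},
}

# Hypothetical floor plan as a room-adjacency graph.
FLOOR_PLAN = {
    "living_room": ["hallway"],
    "hallway": ["living_room", "kitchen", "bedroom"],
    "kitchen": ["hallway"],
    "bedroom": ["hallway"],
}

def choose_goal_room(target_object):
    """Semantic step: pick the room where the object is most likely to be."""
    priors = SEMANTIC_PRIORS[target_object]
    return max(priors, key=priors.get)

def plan_path(start, goal):
    """Classical step: breadth-first search for a shortest room route."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in FLOOR_PLAN[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # goal unreachable on this floor plan

goal = choose_goal_room("refrigerator")
print(plan_path("living_room", goal))
# → ['living_room', 'hallway', 'kitchen']
```

The point of the split is that either module can be swapped out independently: a better-learned prior improves goal selection without touching the planner, and a better planner improves routing without retraining the semantic model.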
Combining all those methods of navigation into a single system brings us humans one step closer to asking robots in natural language, "Can you grab me a cold Pepsi (or another cold drink), please?" It will be interesting to see how the new SemExp system evolves, and what robots will ultimately be able to accomplish using the platform.