Walk This Way
This AI-powered robot adapts its walking gait in real-time to handle any kind of terrain, just like animals do.
The real world is a messy place, and that has long caused problems for engineers who want to move their robots out of controlled environments and into the unpredictable, dynamic spaces beyond. In particular, adapting a robot’s walking gait to the changing terrain it encounters as it moves from place to place has been a thorn in roboticists’ sides. Moving between grass, sidewalks, and sand, or going up and down hills, for instance, each requires a different approach.
In the natural world, animals have no problem handling this. They can walk, trot, run, bound, and jump as needed to efficiently go where they wish. But reproducing this capability that animals make look so easy has been a major challenge in the world of robotics. Some of the best artificial methods available today involve deep reinforcement learning (DRL). While this technology has improved by leaps and bounds in recent years, it still has trouble transitioning between leaps and bounds. Most such algorithms can only master a single type of gait.
Researchers at the University of Leeds and University College London are working to change that, however. They have developed a new DRL-based approach that gives robots the ability to transition between different styles of locomotion to meet whatever difficulties they encounter.
Inspired by the way animals adapt their movements to diverse terrains, the team designed a framework that allows a robot to autonomously select the most appropriate gait in real-time. Instead of pre-programming movement strategies for specific environments, the robot learns to switch between walking, trotting, bounding, and more based on principles observed in nature.
The system is designed to mimic three biologically-inspired mechanisms: gait transition strategies, procedural memory, and real-time motion adjustments. These elements reflect how animals decide when and how to move, remember various gaits for different circumstances, and adapt limb motion on the fly to maintain stability and efficiency.
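The paper's exact architecture is not spelled out here, but the division of labor it describes can be sketched in code. In the toy example below, a high-level selector scores a small repertoire of gaits (standing in for "procedural memory" and gait transition strategies), while a low-level per-gait controller maps the robot's current state to joint targets (the real-time motion adjustments). All names, dimensions, and the speed-based scoring heuristic are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

GAITS = ["walk", "trot", "bound"]

def gait_policy(gait, state):
    """Stand-in for a learned low-level controller: maps the robot's
    proprioceptive state to 12 target joint positions for one gait.
    (A random linear map substitutes for a trained network.)"""
    rng = np.random.default_rng(abs(hash(gait)) % (2**32))
    W = rng.standard_normal((12, state.size)) * 0.1
    return W @ state

def select_gait(state, score_fn):
    """Stand-in for a learned high-level policy: scores each gait
    for the current state and picks the best one."""
    scores = {g: score_fn(g, state) for g in GAITS}
    return max(scores, key=scores.get)

# Toy example: prefer the gait whose typical speed (illustrative
# values) is closest to the robot's estimated forward velocity.
state = np.zeros(24)
state[0] = 1.5  # forward velocity estimate (m/s), illustrative
speed_pref = {"walk": 0.5, "trot": 1.5, "bound": 3.0}
chosen = select_gait(state, lambda g, s: -abs(s[0] - speed_pref[g]))
targets = gait_policy(chosen, state)
print(chosen, targets.shape)  # trot (12,)
```

In the real system both levels would be trained networks rather than hand-written rules; the sketch only shows how selecting *among* remembered gaits differs from training one monolithic gait policy.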
The researchers trained their robot entirely in simulation using hundreds of virtual terrains, allowing it to develop instinctive, reactive movement patterns. Even though it was never exposed to rough ground during training, the robot had no trouble with it. During testing sessions on real-world terrain — including rocks, roots, mud, and uneven woodchips — it handled each challenge with agility. It even recovered after being deliberately tripped with a broom.
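Training across hundreds of randomized virtual terrains is a form of domain randomization, which is what lets a simulation-trained policy cope with surfaces it never saw. A minimal sketch, with parameter names and ranges that are assumptions rather than the paper's values:

```python
import random

def sample_terrain(rng):
    """Draw randomized terrain parameters for one training episode.
    Parameters and ranges are illustrative assumptions."""
    return {
        "friction": rng.uniform(0.3, 1.2),     # surface friction coefficient
        "slope_deg": rng.uniform(-20.0, 20.0), # incline of the ground plane
        "roughness_m": rng.uniform(0.0, 0.08), # height noise on the mesh
    }

rng = random.Random(0)
terrains = [sample_terrain(rng) for _ in range(300)]

# Each episode runs on a different terrain, so the policy learns
# reactive recovery behaviors instead of memorizing one surface.
print(len(terrains))  # 300
```

Because no single terrain is seen often enough to overfit to, the learned behavior generalizes to rocks, roots, and mud that were never explicitly modeled.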
Unlike many current systems that rely heavily on exteroceptive sensors like cameras or radar, this robot uses only interoceptive sensors that measure its joint angles, forces, and balance. This design decision could pay big dividends for robots that must operate in visually obstructed or sensor-compromised environments.
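Concretely, an interoceptive-only observation is just a concatenation of the robot's internal measurements, with no camera or range data anywhere in the input. The vector below assumes a 12-joint quadruped for illustration; the actual observation layout is not specified in the article:

```python
import numpy as np

def build_observation(joint_pos, joint_vel, body_orientation, foot_forces):
    """Concatenate interoceptive measurements into one policy input.
    A 12-joint quadruped is assumed for illustration; no camera,
    LiDAR, or radar channels are included."""
    return np.concatenate([
        joint_pos,         # 12 joint angles (rad)
        joint_vel,         # 12 joint velocities (rad/s)
        body_orientation,  # roll, pitch, yaw from the IMU (rad)
        foot_forces,       # 4 foot contact-force readings (N)
    ])

obs = build_observation(
    np.zeros(12), np.zeros(12), np.zeros(3), np.zeros(4))
print(obs.shape)  # (31,)
```

Keeping the observation this small is what makes the approach robust in smoke, dust, or darkness: nothing in the input depends on being able to see the terrain.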
The team sees many potential use cases for their framework in the future, from searching through disaster zones and nuclear facilities to remote exploration and agriculture. In the long run, they envision extending this embodied AI to more complex robots, including humanoids, enabling machines to move with animal-like intelligence and grace.