When creating new designs, roboticists often turn to nature for inspiration. Animals have a remarkable ability to adapt to their environment and to the limitations of their own bodies. Robots, on the other hand, often become completely useless as soon as they're put in a situation they weren't explicitly programmed to handle. That's why Disney Research has come up with a new deep learning environment that allows a robot to teach itself to walk.
The robot they're using for their tests is modular: it has a central body onto which different kinds of legs can be attached. This creates a range of distinct scenarios depending on how many legs are attached and what kind they are. For example, the robot might be given just a single leg for locomotion. It then has to figure out how best to reach a target using that one leg, much like a three-legged dog that can still zip around the yard without any apparent difficulty.
It learns how to accomplish that using a vision-based tracking system and two kinds of DRL (deep reinforcement learning) algorithms: TRPO (trust region policy optimization) and DDPG (deep deterministic policy gradient). Both are methods for training a neural network over time. When the robot is allowed to experiment with the legs it's given, it eventually learns to move efficiently, and the gaits it discovers are often nearly identical to ones the researchers had explicitly programmed by hand. In practical terms, that means this approach could give robots the ability to adapt to new scenarios the way animals do.
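To make the idea concrete, here is a minimal sketch of the trial-and-error loop described above. Everything in it is hypothetical: the toy "environment" reduces locomotion to a single stride-length parameter, the reward stands in for the vision system's distance-to-target measurement, and the update rule is simple hill climbing rather than the TRPO or DDPG neural-network updates the researchers actually used. The shape of the loop, though, is the same: try a policy, measure the reward, keep what improves.

```python
import random


def run_episode(stride, target=10.0, steps=20):
    """Toy 'locomotion' episode (hypothetical stand-in for the real robot).

    The policy is a single stride length; the reward is the negative
    distance to the target after the episode, mimicking a vision-based
    distance measurement. Closer to the target means higher reward.
    """
    position = 0.0
    for _ in range(steps):
        position += stride  # each step advances by the stride length
    return -abs(target - position)


def train(episodes=500, seed=0):
    """Hill-climbing policy search: a simplified stand-in for DRL updates.

    Perturb the current policy, run an episode, and keep the perturbation
    only if the reward improves. Methods like TRPO and DDPG replace this
    with gradient-based updates to a neural-network policy.
    """
    rng = random.Random(seed)
    policy = 0.0
    best_reward = run_episode(policy)
    for _ in range(episodes):
        candidate = policy + rng.gauss(0, 0.1)
        reward = run_episode(candidate)
        if reward > best_reward:
            policy, best_reward = candidate, reward
    return policy, best_reward
```

After training, the learned stride converges toward target/steps (0.5 here) without that value ever being programmed in, which is the same flavor of result the Disney team reports: learned behavior matching a hand-designed one.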