Walking is something most able-bodied people spend very little time thinking about, but it’s actually a very complex action. Imagine taking a stroll down to the beach: you walk quickly with long strides on the sidewalk, then carefully down a flight of stairs, and then lift your legs high to trudge through loose sand. Each segment of the journey requires a different gait, with some muscles being stressed more than others depending on the terrain.
When companies like Boston Dynamics build robots, they have to program each of those walking styles, along with a way for the robot to decide when to use them. DyRET (Dynamic Robot for Embodied Testing), created by Tonnes Nygaard at the University of Oslo in Norway, figures all of that out on its own, using machine learning through trial and error. Give it a goal like "walk across this surface as quickly and efficiently as possible," and it comes up with the best way to coordinate the movement of its artificial muscles to achieve it.
If you're familiar with machine learning, you know that it works by relating inputs to outputs. The output in this case is how well the robot traverses an area. The inputs are eight parameters that capture the key aspects of walking: they control the length of DyRET's stride, how high it lifts its legs, the speed at which it moves, and more. Using this system, DyRET can experiment with walking on new terrain. Each experiment yields a result (the output), and over time the best-performing parameter sets are kept and varied, much like selection in biological evolution. DyRET may be a university research project, but it's a step toward robots that walk the way animals do.
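To make that loop concrete, here is a minimal sketch of the idea in Python. Everything in it is an assumption for illustration: the real DyRET controller and its fitness measurement are not shown in the article, so `walk_fitness` is a stand-in that simply rewards eight-parameter "gaits" close to an arbitrary target, and the select-and-mutate loop is a generic evolutionary strategy, not Nygaard's actual algorithm.

```python
import random

# Hypothetical setup: a gait is a list of eight numbers in [0, 1],
# standing in for stride length, leg lift height, speed, and so on.
PARAM_COUNT = 8
TARGET = [0.6, 0.3, 0.8, 0.5, 0.2, 0.7, 0.4, 0.9]  # assumed "good gait" for this terrain

def walk_fitness(params):
    # Stand-in for a real trial run: higher is better.
    # On the robot, this would be measured speed/efficiency on the terrain.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def mutate(params, scale=0.05):
    # Variation: perturb each parameter slightly, clamped to [0, 1].
    return [min(1.0, max(0.0, p + random.gauss(0, scale))) for p in params]

def evolve(generations=200, population=20, keep=5):
    random.seed(0)  # deterministic for the example
    pop = [[random.random() for _ in range(PARAM_COUNT)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=walk_fitness, reverse=True)
        parents = pop[:keep]  # selection: the best gaits survive
        # Refill the population with mutated copies of the survivors.
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(population - keep)]
    return max(pop, key=walk_fitness)

best = evolve()
```

Each pass through the loop plays the role of one round of walking trials: gaits that traverse the terrain well are kept, slightly varied copies are tried next, and the population drifts toward better walking, which is the "biological evolution" analogy in miniature.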