Soft robotics is a rapidly growing field with enormous potential in applications where traditional rigid robots would be unsafe or unwieldy. But building a soft robot comes with a number of unique challenges, particularly when it comes to actuation and position sensing. Fortunately, a newly developed soft robotic finger with a built-in sense of its own position may dramatically improve the situation.
This work comes from a team of researchers at the Bioinspired Robotics and Design Lab at the University of California San Diego, along with collaborators around the globe. It’s intended to give soft robots the kind of positional sensing that comes naturally to rigid robots. Because a traditional robot’s frame is inflexible, it’s relatively simple to determine its exact position — you only need to measure the angle at each joint. But, due to their inherent flexibility, that’s not so easy with soft robots.
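To see why joint angles are enough for a rigid robot, consider forward kinematics on a simple two-link planar arm. This is a generic textbook sketch, not code from the paper; the link lengths and function name are made up for illustration.

```python
import math

# Hypothetical 2-link planar arm with rigid links of lengths l1 and l2.
# Because the frame cannot flex, the fingertip position follows exactly
# from the two joint angles -- this is forward kinematics.
def fingertip(theta1, theta2, l1=1.0, l2=1.0):
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

print(fingertip(0.0, 0.0))  # both joints at zero: arm fully extended, (2.0, 0.0)
```

A soft finger has no discrete joints to measure, which is exactly why this simple calculation breaks down for it.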
The researchers’ solution was to use machine learning: a neural network learns correlations between the readings from a motion capture system and flex sensors embedded in the soft robotic finger. The flex sensors were placed somewhat arbitrarily, a configuration that would be extremely difficult to handle through explicit programming. But with the neural network, the system learns to match those sensor readings to the poses it sees in the motion capture system.
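The idea can be sketched as a supervised regression problem: sensor readings in, motion-capture pose out. The toy example below is not the authors’ actual model or data — it trains a minimal one-hidden-layer network in plain Python on a synthetic sensor-to-angle relation (here `math.sin`, standing in for labels that motion capture would supply).

```python
import math
import random

random.seed(0)

# Synthetic stand-in for motion-capture labels: a smooth, unknown
# mapping from a flex-sensor reading to a fingertip angle.
def mocap_angle(sensor):
    return math.sin(sensor)

# Training pairs: (flex-sensor reading, motion-capture angle)
data = [(s / 10.0, mocap_angle(s / 10.0)) for s in range(16)]

# One hidden layer of tanh units, trained by plain stochastic gradient descent
H = 8
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    return h, sum(w2[i] * h[i] for i in range(H)) + b2

lr = 0.1
for _ in range(5000):
    for x, y in data:
        h, yhat = forward(x)
        err = yhat - y  # gradient of 0.5 * (yhat - y)^2 w.r.t. yhat
        for i in range(H):
            dh = err * w2[i] * (1 - h[i] ** 2)  # backprop through tanh
            w2[i] -= lr * err * h[i]
            w1[i] -= lr * dh * x
            b1[i] -= lr * dh
        b2 -= lr * err

# After training, the network estimates the angle from the sensor alone --
# the motion-capture "labels" are no longer needed.
_, pred = forward(0.75)
print(abs(pred - math.sin(0.75)))
```

The real system maps many arbitrarily placed sensors to full finger poses, but the structure is the same: motion capture provides ground truth during training, then drops out at run time.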
That results in the robot developing a limited sense of proprioception, the sense that humans and other animals possess that allows us to know how our bodies are positioned. For example, proprioception is how you are able to walk even when your eyes are closed. After training, the motion capture system can be removed, and the robotic finger will still be able to position itself accurately, as well as know if an external force is acting on it. It’s a big step forward in giving soft robots the sensing capabilities necessary to work in the real world.
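The article doesn’t detail how external forces are detected, but one plausible mechanism is a residual check: if the pose the model predicts from its actuation commands disagrees with the pose inferred from the sensors, something outside the robot is likely pushing on it. The function and threshold below are purely illustrative assumptions.

```python
# Hypothetical contact check: compare the angle the model expects from
# its own actuation with the angle inferred from the flex sensors.
# A large residual suggests an external force is deforming the finger.
def contact_detected(expected_angle, sensed_angle, threshold=0.2):
    return abs(expected_angle - sensed_angle) > threshold

print(contact_detected(0.50, 0.52))  # small residual: free motion -> False
print(contact_detected(0.50, 0.90))  # large residual: likely contact -> True
```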