The ease with which most humans can reach for and grab any number of objects around them might lead one to believe that replicating this ability in robots would be relatively simple. But delve into the details and it turns out there is a huge amount of complexity that is easy to overlook. The calculations required to adjust grip force to objects of varying hardness and to plan finger trajectories are enough to throttle onboard processing and send power consumption through the roof.
An advancement reported by researchers at FZI Research Center for Information Technology may solve some of these problems by simplifying control of five-fingered robotic hands. They have developed a control method in which artificial neurons in a spiking neural network learn to grasp objects in an adaptive way, more like humans. Spiking neural networks are designed to more closely model biological neurons than traditional artificial neural networks, and are well suited to adapting coordinated motions based on sensor feedback.
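To see how a spiking neuron differs from a unit in a conventional artificial neural network, consider the classic leaky integrate-and-fire model: instead of outputting a continuous activation, the neuron accumulates input, leaks toward a resting potential, and emits a discrete spike when it crosses a threshold. The sketch below is purely illustrative, with made-up parameters, and is not drawn from the FZI controller.

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: return spike time indices
    for a sequence of input current samples."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates input.
        v += dt / tau * (v_rest - v) + dt * i_in
        if v >= v_thresh:      # threshold crossed: emit a spike
            spikes.append(t)
            v = v_reset        # reset membrane potential
    return spikes

# A constant drive yields a regular spike train; stronger drive spikes
# more often, so information is carried in spike timing and rate.
print(simulate_lif([0.1] * 50))
```

Because outputs are events in time rather than static values, spiking networks lend themselves to the kind of continuous sensor-driven adaptation described here.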
Several spiking neural network models work in concert to control different aspects related to object grasping — extending and retracting fingers and various types of grasping motions. These networks control a Schunk SVH five-finger anthropomorphic hand.
The model is able to adjust the force it exerts if the object moves or deforms. This adaptability allows the hand to grasp objects as diverse as glass bottles and balloons, which differ in shape, stiffness, and size.
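The core idea behind this kind of adaptive grasping can be sketched as a feedback loop: measure the contact force, compare it to a target, and nudge the finger command up or down. This is a deliberately simplified proportional controller, not the researchers' spiking-network method; all names and gains are hypothetical.

```python
def adapt_grip(measured_force, target_force, command, gain=0.5,
               min_cmd=0.0, max_cmd=1.0):
    """Nudge the finger command toward the target contact force,
    clamped to the actuator's valid range."""
    error = target_force - measured_force
    command += gain * error          # proportional correction
    return max(min_cmd, min(max_cmd, command))

# If the object slips or deforms, measured force drops and the
# command tightens on the next control cycle.
cmd = 0.5
cmd = adapt_grip(measured_force=0.2, target_force=0.4, command=cmd)
print(round(cmd, 2))
```

The same loop relaxes the grip when the measured force overshoots, which is why one controller can handle both a rigid bottle and a fragile balloon.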
In the future, the team would like to add a camera to bring visual information into the system. They believe that in doing so, they would be able to achieve a more natural grasping process — from recognition of the object to positioning of the arm to grasping of the object.