Hold On Loosely

This AI world model teaches robots to adapt mid-move, keeping fragile objects from slipping out of their grasp without crushing them.

nickbild
Comparing the predictive control architecture of humans with the artificial system (📷: K. Nazari et al.)

People have some seriously amazing dexterity if you think about it. We can carry a huge load of groceries up three flights of stairs without dropping anything, crushing a loaf of bread, or breaking a single egg. And we make it look so easy that it might not even seem like a big deal. But if you think it is easy, then try programming a humanoid robot to do the same thing. Chances are that your groceries will look like they have been run over by a truck before they make it to your front door.

The problem is that robots do not intuitively know how to do the twisting, turning, and balancing that people sometimes have to do to carry a heavy load with care. If something starts to slip, all robots can typically do is increase the strength of their grip. That is not going to work out well for that loaf of bread! But in the future, robots might be more capable in these situations, thanks to the work of a group led by researchers at the University of Lincoln in the UK. They have developed a control algorithm that uses human-like movements, instead of a stronger grip, to maintain a stable grasp.

An overview of the control system (📷: K. Nazari et al.)

Humans rely on what are known as “forward models” in the brain, which are internal simulations that predict what will happen next based on our actions. Because these predictions run ahead of actual sensation, they sidestep the delays inherent in sensory feedback loops. For example, before your hand even feels a glass slipping, your brain has already anticipated the possibility and sent signals to adjust your motion. The researchers built a similar world model for robots, powered by data from tactile sensors.
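To make that idea concrete, here is a minimal sketch of what such a tactile forward model might look like. This is not the authors' implementation: the feature choices, function names, and weights are illustrative assumptions. The idea is simply a learned predictor that maps recent tactile frames plus a planned motion to a slip probability.

```python
# Hypothetical sketch of a tactile forward model (not the paper's code).
# It predicts the probability that an object will slip over a short horizon,
# given recent tactile readings and the motion the controller plans to apply.
import numpy as np

def predict_slip_probability(tactile_window: np.ndarray,
                             planned_velocity: np.ndarray,
                             weights: np.ndarray,
                             bias: float) -> float:
    """Forward model: map (recent tactile frames, planned action) -> P(slip).

    tactile_window: (T, S) array of the last T frames from S taxels.
    planned_velocity: (6,) end-effector twist the controller intends to apply.
    weights, bias: parameters of a (hypothetical) learned logistic model.
    """
    # Summarize the tactile history: mean signal plus its rate of change,
    # since rising shear-like activity often precedes a slip event.
    mean_signal = tactile_window.mean(axis=0)
    signal_rate = tactile_window[-1] - tactile_window[0]
    features = np.concatenate([mean_signal, signal_rate, planned_velocity])

    # Logistic regression stands in for whatever learned predictor is used.
    logit = features @ weights + bias
    return float(1.0 / (1.0 + np.exp(-logit)))

# Example: 10 frames from a 16-taxel sensor, with a small planned motion.
rng = np.random.default_rng(0)
tactile = rng.normal(size=(10, 16))
velocity = np.array([0.05, 0.0, 0.0, 0.0, 0.0, 0.0])  # m/s and rad/s
w = rng.normal(scale=0.1, size=16 + 16 + 6)            # placeholder weights
print(f"P(slip) = {predict_slip_probability(tactile, velocity, w, 0.0):.2f}")
```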

With this model, the robot can forecast the likelihood of an object slipping as it follows a planned motion. If the model predicts trouble ahead, the controller tweaks the movement — maybe slowing it down, shifting the angle, or making other subtle changes — so that the object stays put. In tests using a Franka Emika robotic arm, the system proved to be highly effective, especially in situations where increasing grip force was not possible or was too risky for the object.
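The article does not spell out the exact controller, but a sampling-based scheme in the same spirit might look like the sketch below: propose small tweaks to the planned motion, score each one with the forward model, and execute the candidate that best trades predicted slip risk against deviation from the plan. The stub slip model, cost weights, and perturbation ranges are all assumptions for illustration.

```python
# Hypothetical sampling-based control sketch, assuming a forward model like
# the one above (stubbed out here). Not the authors' controller.
import numpy as np

def slip_model(action: np.ndarray) -> float:
    """Stand-in for a learned tactile forward model: faster, more aggressive
    motions are assumed to raise the predicted slip probability."""
    return float(1.0 / (1.0 + np.exp(-(4.0 * np.linalg.norm(action) - 1.0))))

def choose_action(planned: np.ndarray,
                  n_candidates: int = 64,
                  risk_weight: float = 2.0,
                  rng: np.random.Generator | None = None) -> np.ndarray:
    """Pick the candidate minimizing slip risk plus deviation from the plan."""
    rng = rng or np.random.default_rng()
    # Perturb the planned twist with small random adjustments (slow down,
    # shift direction slightly), always keeping the plan itself in the pool.
    noise = rng.normal(scale=0.02, size=(n_candidates, planned.size))
    candidates = np.vstack([planned, planned + noise])

    best, best_cost = planned, np.inf
    for a in candidates:
        cost = risk_weight * slip_model(a) + np.linalg.norm(a - planned)
        if cost < best_cost:
            best, best_cost = a, cost
    return best

planned_twist = np.array([0.20, 0.0, 0.05, 0.0, 0.0, 0.0])  # a risky fast move
safe_twist = choose_action(planned_twist)
print("planned:", planned_twist)
print("chosen: ", safe_twist)  # typically a slowed-down variant of the plan
```

With the stub model above, slowing the motion lowers the predicted slip risk faster than the deviation penalty grows, so the chosen action tends to be a gentler version of the original plan, which is the behavior the researchers describe.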

At present, the team is working to further refine their approach. They plan to make the controller faster and more efficient for demanding real-time applications, as well as extend it to handle deformable objects and two-handed manipulation. They also aim to combine it with computer vision, allowing robots to plan their movements using both tactile and visual data. Finally, they are exploring ways to make these decision-making processes more explainable, so humans can understand, and trust, what is going on inside the robot's “mind.”

The code and datasets used in this work can be found on the project's website.
