A New View Gives Soft Robots Precision Control

MIT’s deep learning-based approach lets soft robots move with precision using just one camera, making them practical for real-world uses.

nickbild
NJF teaches soft robots to move with precision (📷: S. Li et al.)

For many applications, soft robots are more useful than traditional robots made from rigid components. A robot designed to assist a surgeon with a delicate procedure, for instance, is less likely to cause unintentional harm to a patient if it is made of soft materials. And a robot that needs to squeeze into tight spaces is far less likely to get stuck if it can bend and squish around whatever stands in its way.

But soft robots are still relatively rare outside of research labs. A major reason they have failed to take off is the challenge of controlling their movements. When a robot tends to flop around, it is very difficult to predict how its actuators should be adjusted to produce a specific action. Needless to say, if precision is not possible, you do not want one of these machines performing surgery on you.

A 3D representation of the robot can be created from a single image (📷: S. Li et al.)

This problem may not be so pronounced for soft robots in the near future, however. A group of researchers at MIT has developed a deep learning-based approach that allows them to accurately predict how a soft robot will respond to control inputs. And their system does not require impractically large or expensive hardware installations to make this possible; a single camera is enough to get the job done.

The approach, called Neural Jacobian Fields (NJF), replaces traditional modeling techniques with a vision-driven, self-learning control system. Rather than requiring precise mathematical models, physical sensors, or motion-capture systems, NJF teaches the robot to understand its own body through observation. During training, a robot is recorded performing random movements using a multi-camera setup. From these visual inputs alone, NJF learns both the robot’s shape and how different control signals affect its movement.
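
To make the idea concrete, the sketch below illustrates what a "Jacobian field" could look like in code: a small network that maps any 3D point on the robot's body to a local Jacobian relating actuator commands to that point's motion, trained so that predicted motion matches motion observed during random exploration. This is a minimal, hypothetical PyTorch sketch; the names (JacobianField, training_step) are my own, and the real NJF pipeline recovers both the robot's shape and these relationships from raw multi-camera video rather than from pre-tracked 3D points as assumed here.

```python
import torch
import torch.nn as nn

class JacobianField(nn.Module):
    """Minimal sketch (not the authors' code): an MLP that maps a 3D
    point on the robot's body to a 3 x A Jacobian, where A is the
    number of actuators. Each Jacobian describes how a small change
    in the actuation command moves that point."""

    def __init__(self, num_actuators: int, hidden: int = 256):
        super().__init__()
        self.num_actuators = num_actuators
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 * num_actuators),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) -> per-point Jacobians: (N, 3, num_actuators)
        return self.mlp(points).view(-1, 3, self.num_actuators)

model = JacobianField(num_actuators=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(points, delta_u, observed_motion):
    """One self-supervised step: during random exploration, the motion
    predicted by the Jacobians should match the motion actually seen
    by the cameras. points: (N, 3); delta_u: (A,); observed_motion: (N, 3)."""
    J = model(points)                                  # (N, 3, A)
    predicted = torch.einsum("nca,a->nc", J, delta_u)  # first-order motion model
    loss = nn.functional.mse_loss(predicted, observed_motion)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Supervising directly on 3D motion keeps the sketch short; in the system described above, the learning signal comes from the multi-camera recordings themselves.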

Once trained, the robot no longer needs all of those cameras. A single monocular camera is enough to track and control its movements in real time. This allows robots to operate autonomously and accurately in the real world, even when they are made from soft or irregular materials. In tests, the system achieved less than three degrees of error in joint motion and sub-centimeter accuracy in fingertip placement, all without embedded sensors.
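
Once a Jacobian field like this exists, closing the loop is conceptually simple: track a point of interest (say, a fingertip) with the camera, query the local Jacobian at that point, and solve a small least-squares problem for the actuation change that best produces the desired motion. Again, this is a hypothetical sketch built on the class above, not the published controller:

```python
import torch

def control_step(model, fingertip, target, gain=0.5, max_delta=0.1):
    """Hypothetical closed-loop step. fingertip and target are (3,)
    tensors; fingertip would come from tracking with a single
    monocular camera at runtime."""
    with torch.no_grad():
        J = model(fingertip.view(1, 3))[0]     # (3, A) local Jacobian
        desired = gain * (target - fingertip)  # move a fraction of the error
        # Least-squares actuation change: minimize ||J @ delta_u - desired||
        delta_u = torch.linalg.pinv(J) @ desired
        # Clamp commands for safety on real hardware (illustrative)
        return delta_u.clamp(-max_delta, max_delta)
```

Repeating this step as the camera feed updates drives the tracked point toward the target without any analytical model of the robot's body.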

The system can accommodate many types of robots (📷: S. Li et al.)

This work could significantly expand the practical uses of soft robotics. Unlike rigid industrial arms that require costly sensors and precise calibration, robots equipped with NJF can adapt to messy, unstructured environments such as farms, warehouses, and disaster zones using only visual feedback. It also opens the door to more creative and experimental hardware designs, since engineers no longer need to build their robots around the limitations of traditional modeling techniques.

NJF is inspired by the way humans learn to move: through trial and error, guided by what we see. It is also an example of a broader shift away from hard-coded control logic and toward learning-based systems that can adapt and improve over time. By giving robots an internal sense of how their bodies work, the system allows for more fluid and natural movements. And that could bring us closer to a world where robots are flexible, adaptive, and accessible to all.
