A team of researchers from MIT, the MIT-IBM Watson AI Lab, and the University of California at San Diego has unveiled a simulation environment designed to help robots get to grips — literally — with soft objects: PlasticineLab.
Named for the children's modeling clay, PlasticineLab has one goal: make it easier to train robots to handle soft objects that bend, stretch, and deform, and won't necessarily spring back to their original shape afterwards. To do that, it gives the robots a series of tasks to carry out.
"In each task, the agent uses manipulators to deform the plasticine into a desired configuration," the team explains. "The underlying physics engine supports differentiable elastic and plastic deformation using the DiffTaichi system, posing many under-explored challenges to robotic agents."
The secret sauce behind PlasticineLab: Baked-in physics equations. "Programming a basic knowledge of physics into the simulator makes the learning process more efficient," notes lead author Zhiao Huang. "This gives the robot a more intuitive sense of the real world, which is full of living things and deformable objects."
"It can take thousands of iterations for a robot to master a task through the trial-and-error technique of reinforcement learning, which is commonly used to train robots in simulation," adds senior author Chuang Gan. "We show it can be done much faster by baking in some knowledge of physics, which allows the robot to use gradient-based planning algorithms to learn."
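The contrast Gan draws can be sketched in a few lines: because the simulator is differentiable, the planner can follow the gradient of a task loss back through the rollout instead of groping via trial and error. The toy below is a minimal illustration under simplified assumptions (one-dimensional state, dynamics `x_{t+1} = x_t + a_t`, with the gradient worked out by hand); PlasticineLab's actual engine differentiates full elastic and plastic deformation via DiffTaichi.

```python
# Gradient-based planning through a (trivially) differentiable simulator.
# Hypothetical toy, not PlasticineLab's API: a point is pushed by a sequence
# of actions, and we optimize the actions so the final state hits a target.

def simulate(actions, x0=0.0):
    """Roll out the dynamics x_{t+1} = x_t + a_t and return the final state."""
    x = x0
    for a in actions:
        x = x + a
    return x

def plan(target, horizon=5, lr=0.05, iters=200):
    """Optimize the action sequence by gradient descent on final-state error."""
    actions = [0.0] * horizon
    for _ in range(iters):
        x_final = simulate(actions)
        # Loss L = (x_final - target)^2. Each action shifts the final state
        # one-for-one, so dL/da_t = 2 * (x_final - target) for every t.
        grad = 2.0 * (x_final - target)
        actions = [a - lr * grad for a in actions]
    return actions

actions = plan(target=1.0)
final_state = simulate(actions)  # converges close to the target of 1.0
```

A reinforcement-learning agent would need many sampled rollouts to discover the same action sequence; here the gradient points directly toward it, which is the speedup the authors report.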
The team's paper is available on arXiv.org under open-access terms, but while the researchers have promised to make PlasticineLab publicly available, the official website does not yet include a link to the source code.