RoboCraft Uses Vision Data and a Graph Network to Learn the Secret of Shaping Play Dough

Designed to expand the tasks robots can handle into the realm of dough-like objects — including making dumplings — RoboCraft turns to play.

Scientists from the Massachusetts Institute of Technology (MIT) and Stanford University have trained a robot to carry out a task that will be immediately familiar to human children: forming play dough into shapes.

"Modeling and manipulating objects with high degrees of freedom are essential capabilities for robots to learn how to enable complex industrial and household interaction tasks, like stuffing dumplings, rolling sushi, and making pottery," explains Yunzhu Li, one of the team working on the RoboCraft project.

RoboCraft sees a robot arm with 3D-printed fingers playing with dough, but for a very good reason. (📹: Shi et al)

"While there’s been recent advances in manipulating clothes and ropes," Li continues, "we found that objects with high plasticity, like dough or plasticine — despite ubiquity in those household and industrial settings — was a largely under-explored territory. With RoboCraft, we learn the dynamics models directly from high-dimensional sensory data, which offers a promising data-driven avenue for us to perform effective planning."

The reason dough-like materials are more difficult for robots to grasp — literally — is that manipulation in one area is likely to cause movement in others. The whole material has to be taken into account when shaping, something that comes naturally to human children but is a challenge for robots.

The RoboCraft system represents the play dough blob as a collection of particles, built from nothing more than visual data captured by a camera. A graph neural network then uses this particle cloud to predict how the dough will respond to manipulation, letting the robot plan the motions needed to form the dough into the requested shape before carrying them out.
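For readers curious what a particle-based graph dynamics model looks like in practice, the sketch below is a minimal illustration in PyTorch and not the authors' implementation: the ParticleDynamicsGNN class, the radius-based connectivity, the layer sizes, and the plan_action random-shooting planner are all illustrative assumptions. It treats the dough as a cloud of 3D particle positions, passes messages between nearby particles to predict how they move under a small gripper displacement, and then picks the candidate action whose predicted outcome best matches a target shape.

```python
# Minimal sketch (not the RoboCraft source code) of a particle-based
# graph dynamics model plus a simple planner. Names and sizes are
# illustrative assumptions.

import torch
import torch.nn as nn


class ParticleDynamicsGNN(nn.Module):
    """One message-passing step over a particle graph built by radius connectivity."""

    def __init__(self, hidden=64, radius=0.05):
        super().__init__()
        self.radius = radius
        # Edge model: consumes the relative displacement between neighbouring particles.
        self.edge_mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # Node model: consumes aggregated messages plus the gripper action.
        self.node_mlp = nn.Sequential(nn.Linear(hidden + 3, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, particles, action):
        # particles: (N, 3) positions extracted from camera data; action: (3,) gripper displacement.
        dist = torch.cdist(particles, particles)               # (N, N) pairwise distances
        adj = (dist < self.radius).float()                     # connect nearby particles
        rel = particles.unsqueeze(0) - particles.unsqueeze(1)  # (N, N, 3) relative offsets
        messages = self.edge_mlp(rel) * adj.unsqueeze(-1)      # mask out non-edges
        agg = messages.sum(dim=1)                              # (N, hidden) aggregated messages
        inp = torch.cat([agg, action.expand(len(particles), 3)], dim=-1)
        return particles + self.node_mlp(inp)                  # predicted next particle positions


def plan_action(model, particles, target, candidates=64):
    """Random-shooting planner: simulate candidate gripper moves, keep the best one."""
    best_action, best_cost = None, float("inf")
    for _ in range(candidates):
        action = 0.02 * torch.randn(3)                         # small random gripper displacement
        with torch.no_grad():
            predicted = model(particles, action)
        cost = torch.cdist(predicted, target).min(dim=1).values.mean()  # rough shape distance
        if cost < best_cost:
            best_action, best_cost = action, cost
    return best_action


if __name__ == "__main__":
    model = ParticleDynamicsGNN()
    dough = torch.rand(300, 3) * 0.1                           # stand-in for camera-derived particles
    goal = torch.rand(300, 3) * 0.1                            # stand-in for the requested shape
    print("chosen gripper move:", plan_action(model, dough, goal))
```

In the actual system the particle cloud is extracted from camera observations and the learned dynamics model drives a more capable planner; the random-shooting loop here simply stands in to keep the example short.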

"RoboCraft essentially demonstrates that this predictive model can be learned in very data-efficient ways to plan motion," Li claims. "In the long run, we are thinking about using various tools to manipulate materials. If you think about dumpling or dough making, just one gripper wouldn’t be able to solve it. Helping the model understand and accomplish longer-horizon planning tasks, such as, how the dough will deform given the current tool, movements and actions, is a next step for future work."

More information on the project is available on the RoboCraft website, while the paper on the topic is available under open-access terms on Cornell's arXiv preprint server. Source code for the project has been promised, but had not yet been released at the time of writing.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.