Researchers Use a GoPro and a Cheap Grabber Tool to Train a Robot Arm in the Art of Manipulation

Linked together with a 3D-printed mount, the camera and the grabber prove a great way to capture data for visual training.

A team of researchers from the University of California, Berkeley, Carnegie Mellon University, New York University, and Facebook AI Research has released a paper demonstrating a new technique for training robots: strapping a GoPro camera to a cheap off-the-shelf pincer-style grabber.

"Visual imitation learning provides a framework for learning complex manipulation behaviors by leveraging human demonstrations. However, current interfaces for imitation such as kinesthetic teaching or teleoperation prohibitively restrict our ability to efficiently collect large-scale data in the wild," the researchers explain, in a paper brought to our attention by VentureBeat, of the core problem they are seeking to address. "Obtaining such diverse demonstration data is paramount for the generalization of learned skills to novel scenarios. In this work, we present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots."

The team is far from the first to discover the utility of these inexpensive pincer-style grabbing tools, typically sold as assistive devices for people with limited mobility: Hello Robot's low-cost mobile manipulator uses a grasping tool adapted from an off-the-shelf part ordered from Amazon by co-founder Charlie Kemp, who describes it as "such a weird looking thing" that "just blew away the other grabbers."

"We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot’s end-effector. To extract action information from these visual demonstrations, we use off-the-shelf Structure from Motion (SfM) techniques in addition to training a finger detection network. We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task. For both tasks, we use standard behavior cloning to learn executable policies from the previously collected offline demonstrations."

In the visual learning system, the grabber serves two purposes: it is the end-effector of the robot arm itself, and it is the tool a human uses to carry out basic manipulation tasks. Those demonstrations are captured by a GoPro camera attached to the grabber via a custom 3D-printed mount, and it is this footage that is used to train the robot arm.

"To improve learning performance," the team writes, "we employ a variety of data augmentations and provide an extensive analysis of its effects. Finally, we demonstrate the utility of our interface by evaluating on real robotic scenarios with previously unseen objects and achieve an 87 percent success rate on pushing and a 62 percent success rate on stacking."

The paper is available now under open-access terms on arXiv.org; more information, including Python source code published under an unspecified license, can be found on the project website.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.