PoseNet, Normally Used for Pose Estimation, Proves Capable of Giving Robots a Sense of Touch Too

Combining deep learning with optical tactile sensors shows potential for boosting robotic touch sensitivity toward human levels.

A pair of researchers at the University of Bristol has published a paper on giving robots a sense of touch, using a deep learning technique normally applied to pose estimation but fed with optical tactile sensor data instead.

"Our primary human senses of vision, audition, and touch enable us to interact with a complex and ever-changing environment. Vision and audition are distal senses with which we reason about and plan our interactions," the researchers explain in their article. "In contrast, touch is a proximal sense that enables us to interact directly with our surroundings, either to avoid harm or to explore and manipulate nearby objects. It has become something of a cliché to remark that if we want robots to interact in a useful way with our world, they will need versions of these three senses and the intelligence to use them effectively."

The researchers turned to a tried and tested convolutional neural network (CNN), PoseNet, which is typically used for pose estimation: figuring out where the head, body, arms, and legs are in a video feed. Rather than a video feed, however, the researchers fed the network images from the TacTip optical tactile sensor, standing in for the robot's sense of touch, and found it could predict the sensor's contact pose with considerable accuracy.
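To make the idea concrete, here is a minimal sketch of a CNN that regresses a contact pose from a tactile image, trained with a standard mean-squared-error loss. The architecture, image size, and three-parameter pose used here are illustrative assumptions, not the network described in the paper.

```python
import torch
import torch.nn as nn

class TactilePoseNet(nn.Module):
    """Small CNN regressing contact pose from a tactile image.

    Hypothetical architecture for illustration only; layer sizes and
    the (x, y, theta) pose parameterization are assumptions.
    """

    def __init__(self, pose_dim: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, pose_dim)  # regress contact pose

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# One training step on dummy data: minimize error between predicted
# and ground-truth contact poses.
model = TactilePoseNet()
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 1, 128, 128)  # batch of grayscale tactile images
poses = torch.randn(8, 3)             # labeled contact poses
opt.zero_grad()
loss = loss_fn(model(images), poses)
loss.backward()
opt.step()
```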

"Overall, the approach for robot touch introduced here offers the potential for safe and precise physical interaction with complex environments, encompassing tasks from exploring natural objects to closed-loop dexterous manipulation," the researchers conclude. "Even though we used the TacTip optical tactile sensor, a similar approach should apply to other high-accuracy tactile sensors, such as the GelSight, provided that they can slide repeatedly across objects without damage."

"This work aims to bring artificial tactile sensing one step closer to human performance and so raises the question of whether humans use similar strategies during their own tactile interactions. In our view, soft tactile sensors such as human fingertips cannot function usefully in natural environments unless they have a perceptual system with invariance to contact motion. As demonstrated here, appropriately trained deep NNs can solve that problem."

The team's work has been published as an early-access paper in IEEE Robotics & Automation Magazine under closed-access terms; more information is available in a TechXplore interview with co-author Professor Nathan Lepora.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.