Researchers at ETH Zurich have demonstrated a technique that could give future soft-skinned robots a sense of touch, using embedded cameras linked to a computer vision system rather than sensors that measure contact directly.
"This paper describes the design of a multi-camera optical tactile sensor that provides information about the contact force distribution applied to its soft surface," the team explains in the abstract of the paper. "This information is contained in the motion of spherical particles spread within the surface, which deforms when subject to force. The small embedded cameras capture images of the different particle patterns that are then mapped to the three-dimensional contact force distribution through a machine learning architecture.
"The design proposed in this paper exhibits a larger contact surface and a thinner structure than most of the existing camera-based tactile sensors, without the use of additional reflecting components such as mirrors. A modular implementation of the learning architecture is discussed that facilitates the scalability to larger surfaces such as robotic skins."
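The pipeline the abstract describes, camera images of displaced particles mapped to a per-location force estimate by a learned model, can be sketched in miniature. Everything below is an illustrative assumption: the image sizes, the surface grid, and the tiny stand-in network are not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# One grayscale "particle pattern" image per camera
# (hypothetical 16x16 crops, not the real sensor resolution).
n_cameras, h, w = 4, 16, 16
images = rng.random((n_cameras, h, w))

# The soft contact surface is discretized into a grid of bins; the model
# predicts a 3D force vector (one normal + two shear components) per bin.
grid_bins = 8 * 8
out_dim = grid_bins * 3

# A one-hidden-layer mapping from pixels to forces stands in for the
# paper's learned architecture.
in_dim = n_cameras * h * w
W1 = rng.standard_normal((in_dim, 64)) * 0.01
W2 = rng.standard_normal((64, out_dim)) * 0.01

def predict_force_distribution(imgs):
    x = imgs.reshape(-1)                     # flatten all camera views
    hidden = np.tanh(x @ W1)                 # stand-in feature extraction
    return (hidden @ W2).reshape(grid_bins, 3)  # 3D force per surface bin

forces = predict_force_distribution(images)
print(forces.shape)  # one 3D force estimate per surface bin
```

A modular variant along the lines the authors mention might run such a mapping per camera and tile the per-camera outputs across a larger skin, rather than feeding all views into one monolithic model.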
In an interview with Tech Xplore, the researchers describe how the low-cost quad-camera system in the functional prototype, built from Raspberry Pi Camera Module v2 boards, provides 65,000 pixels — generating "a large amount of information at very high resolution, which is ideal for a data-driven approach to tactile sensing," while also capturing force distribution data that rival approaches lack.
The team's paper, submitted to the 2020 IEEE International Conference on Soft Robotics (RoboSoft 2020), is available now via arXiv.org.