Researchers at Cornell University have developed a stretchable sensor, the Stretchable Lightguide for Multimodal Sensing (SLIMS), which they say can give both robots and virtual reality systems a human-like sense of touch.
"Right now, sensing is done mostly by vision," Associate Professor Rob Shepherd says of the project. "We hardly ever measure touch in real life. This skin is a way to allow ourselves and machines to measure tactile interactions in a way that we now currently use the cameras in our phones. It’s using vision to measure touch. This is the most convenient and practical way to do it in a scalable way."
The skin is based on the SLIMS sensor. Building on earlier work from the same lab published in 2016, each SLIMS sensor is made up of two stretchable cores: one core is transparent, while the other contains dyes at multiple locations and a white LED at one end. As the sensor is stretched, bent, and deformed, an Adafruit Bluetooth board measures the output of light sensors attached to each core, building up a picture not only of bending but also of pressure and strain.
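The decoding idea can be illustrated with a minimal sketch (not the authors' code): because each dye patch sits at a known spot along the dyed core and absorbs a particular colour, a drop in one colour channel relative to an undeformed baseline hints at where the lightguide is bending. The channel-to-location mapping, threshold, and readings below are all hypothetical.

```python
# Illustrative sketch, not the published SLIMS implementation.
# Assumption: three dye patches, each absorbing one colour channel;
# bending at a patch increases that channel's attenuation relative
# to a baseline reading taken with the sensor undeformed.

BASELINE = {"red": 1.00, "green": 1.00, "blue": 1.00}   # normalised intensities
DYE_LOCATIONS = {"red": "fingertip",                     # hypothetical mapping
                 "green": "middle joint",
                 "blue": "knuckle"}

def locate_deformation(reading, threshold=0.15):
    """Return (location, light loss) pairs for channels whose intensity
    dropped past `threshold` relative to the undeformed baseline."""
    events = []
    for channel, baseline in BASELINE.items():
        loss = baseline - reading[channel]
        if loss > threshold:
            events.append((DYE_LOCATIONS[channel], round(loss, 2)))
    return events

# A strong loss in the green channel suggests bending at the middle joint.
print(locate_deformation({"red": 0.97, "green": 0.60, "blue": 0.95}))
# → [('middle joint', 0.4)]
```

In the real sensor, comparing the dyed core against the plain transparent core is what lets the system separate overall pressure and strain from localised bending; the sketch above only covers the localisation step.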
To prove the concept, the team created a wearable implementation: a 3D-printed glove. "VR and AR immersion is based on motion capture. Touch is barely there at all," Shepherd claims. "Let’s say you want to have an augmented reality simulation that teaches you how to fix your car or change a tire. If you had a glove or something that could measure pressure, as well as motion, that augmented reality visualization could say, 'Turn and then stop, so you don’t overtighten your lug nuts.' There’s nothing out there that does that right now, but this is an avenue to do it."
The team also claims the same system could be used to give robots a human-like sense of touch, allowing a robot gripper to detect slippage at considerably less cost than rival higher-resolution sensors.
The paper has been published in the journal Science under closed-access terms.