Magic Carpet

This intelligent carpet uses tactile sensors and machine learning to determine the body position of anyone who steps on it.

Nick Bild
Data collection (📷: Y. Luo et al.)

Estimation of human poses has applications in controlling computers, gaming, healthcare, robotics, and more. Much work has been done in this area, typically using camera-based approaches. Software packages such as OpenPose are now in wide use and are effective for many use cases. These camera-based solutions do have some inherent problems, however. Any occlusion between the camera and the subject will prevent them from functioning. Moreover, a camera is intrusive and raises privacy concerns that make vision-based solutions unacceptable for many uses.

Recognizing that most human activities are dependent on contact with the environment, a team of researchers at MIT CSAIL has developed a pose estimation technique that relies on tactile interactions between humans and the floor. Their 36-square-foot intelligent carpet contains 9,000 embedded piezoresistive pressure sensors and is able to collect high-resolution, real-time data on human-ground tactile interactions.
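To picture the raw data, one frame from the carpet can be treated as a 2D pressure map. The grid shape, value range, and calibration in the sketch below are illustrative assumptions, not the team's actual specifications.

import numpy as np

# Hypothetical sensor grid: ~9,000 piezoresistive taxels arranged
# as a 90 x 100 array (the real layout may differ).
GRID_ROWS, GRID_COLS = 90, 100

def read_tactile_frame(raw_adc: np.ndarray) -> np.ndarray:
    """Convert one frame of raw ADC readings into a normalized
    pressure map in [0, 1]."""
    frame = raw_adc.reshape(GRID_ROWS, GRID_COLS).astype(np.float32)
    # Piezoresistive sensors drift, so subtract a per-frame baseline
    # before scaling (a simple placeholder for real calibration).
    frame -= frame.min()
    peak = frame.max()
    return frame / peak if peak > 0 else frame

# Example: one simulated frame of 12-bit ADC values
raw = np.random.randint(0, 4096, size=GRID_ROWS * GRID_COLS)
pressure_map = read_tactile_frame(raw)
print(pressure_map.shape)  # (90, 100)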

Previous studies have attempted to estimate human pose when a large portion of the body is directly contacting a sensing surface. In this case, the team set out to estimate 3D body positions during normal daily activities with limited contact between subject and sensor, in some cases only the feet. A deep neural network was used to correlate these sensor readings with poses.
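As a minimal sketch of that idea, here is a toy PyTorch network that regresses 3D keypoints from a short stack of tactile frames. The architecture, layer sizes, and input window are assumptions for illustration only, not the researchers' actual model.

import torch
import torch.nn as nn

NUM_KEYPOINTS = 21  # 3D keypoints, as described below

class TactilePoseNet(nn.Module):
    """Toy regressor from a stack of tactile frames to 3D keypoints."""
    def __init__(self, in_frames: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_frames, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, NUM_KEYPOINTS * 3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, rows, cols) stack of pressure maps
        z = self.features(x).flatten(1)
        return self.head(z).view(-1, NUM_KEYPOINTS, 3)

model = TactilePoseNet()
frames = torch.rand(4, 8, 90, 100)  # batch of tactile windows
print(model(frames).shape)  # torch.Size([4, 21, 3])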

A dataset consisting of 1.8 million synchronized frames of tactile and visual data was collected from ten volunteers performing fifteen different activities. The visual data was used solely to assign ground truth labels to the tactile sensor data. The neural network model was trained on this data to correlate sensor readings with a 3D pose consisting of twenty-one keypoints, including the head, neck, shoulders, elbows, wrists, hips, knees, ankles, heels, and small and big toes.
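Training is then ordinary supervised regression: synchronized tactile windows in, camera-derived 3D keypoints as targets. One optimization step might look like the following, reusing the hypothetical TactilePoseNet sketched above.

import torch
import torch.nn as nn

model = TactilePoseNet()  # the toy network from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(tactile: torch.Tensor, keypoints_gt: torch.Tensor) -> float:
    """One step on a batch of tactile windows (B, 8, 90, 100) and
    vision-derived 3D keypoint labels (B, 21, 3)."""
    optimizer.zero_grad()
    loss = loss_fn(model(tactile), keypoints_gt)  # regress toward vision labels
    loss.backward()
    optimizer.step()
    return loss.item()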

The model was first evaluated on single-person pose detection. Performance was quite acceptable, with average keypoint localization errors of less than ten centimeters. Errors were greater for upper-body keypoints, which makes intuitive sense: hand position has less bearing on the pressure map under the feet than, say, knee position. The model also generalized well to previously unseen individuals; however, it performed poorly on new poses it had not been trained on.
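"Keypoint localization error" here presumably refers to the average Euclidean distance between predicted and ground-truth keypoint positions; a simple version of such a metric (an assumption about the exact formula used) is shown below.

import numpy as np

def mean_keypoint_error_cm(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Mean Euclidean distance per keypoint, in centimeters.
    pred, gt: (num_frames, 21, 3) arrays of 3D positions in meters."""
    dist_m = np.linalg.norm(pred - gt, axis=-1)  # (num_frames, 21)
    return dist_m.mean(axis=0) * 100.0           # cm per keypoint

# Example with synthetic predictions perturbed by 4 cm per-axis noise
gt = np.random.rand(1000, 21, 3)
pred = gt + np.random.normal(scale=0.04, size=gt.shape)
print(mean_keypoint_error_cm(pred, gt).round(1))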

When tested on multi-person pose estimation, the model's performance degraded somewhat, but it was still able to localize each individual and predict their 3D poses with localization errors of less than fifteen centimeters. At present, the same network is used for both single- and multi-person pose estimation. The researchers would like to add a region proposal network to the processing pipeline in the future to improve multi-person performance.

A secondary evaluation assessed the model's ability to classify human actions (e.g., push-ups, sit-ups, and rolling). The model achieved an impressive 97.8% accuracy on this task.
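A plausible way to obtain action labels from the same tactile windows is a small classification head; the sketch below is again illustrative, not the paper's design.

import torch
import torch.nn as nn

NUM_ACTIONS = 15  # fifteen activities in the dataset

# Hypothetical action classifier over the same tactile windows
action_head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(8 * 90 * 100, 128), nn.ReLU(),
    nn.Linear(128, NUM_ACTIONS),
)

logits = action_head(torch.rand(4, 8, 90, 100))
pred_action = logits.argmax(dim=1)  # predicted activity per sample
print(pred_action.shape)  # torch.Size([4])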

This low-cost (approximately $100) tactile sensing carpet opens up new opportunities for pose estimation where visual obstructions or privacy considerations limit the utility of previous approaches.

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.