Raspberry Pi-Powered Computer Vision System Calculates Uncertainty to Make Robotic Prosthetics Safer

With a camera in a pair of glasses, or on the limb itself, a computer vision system improves locomotion for lower-limb prosthetics.

The team tried two computer vision systems: One worn in glasses, and the other on the leg. (📷: Zhong et al)

Researchers from North Carolina State University and the University of North Carolina at Chapel Hill have released a paper describing how computer vision can help users of robotic lower-limb prostheses walk more naturally and safely on uneven surfaces — by incorporating some Raspberry Pi-powered guesswork.

"Lower-limb robotic prosthetics need to execute different behaviours based on the terrain users are walking on," explains Associate Professor Edgar Lobaton, one of the authors on the paper. "The framework we’ve created allows the AI in robotic prostheses to predict the type of terrain users will be stepping on, quantify the uncertainties associated with that prediction, and then incorporate that uncertainty into its decision-making."

To prove the concept, the team built two computer vision systems: one integrated into a pair of glasses, and another mounted on the lower limb itself, based around a Raspberry Pi single-board computer, camera, and inertial measurement unit (IMU). "By leveraging the Bayesian neural networks (BNNs), our framework can quantify the uncertainty caused by different factors (e.g., observation noise, and insufficient or biased training) and produce a calibrated predicted probability for online decision-making," the paper's abstract reads.

"The inference time of our framework on a portable embedded system was less than 80 ms/frame. The results in this study may lead to novel context recognition strategies in reliable decision-making, efficient sensor fusion, and improved intelligent system design in various applications."
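The core idea — sampling a distribution over predictions rather than a single point estimate, then only acting when uncertainty is low — can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the terrain labels, the Gaussian-over-logits stand-in for a true Bayesian neural network, and the entropy threshold are all assumptions made for the example.

```python
import numpy as np

# Hypothetical terrain labels (not taken from the paper).
TERRAINS = ["tile", "brick", "concrete", "grass", "upstairs", "downstairs"]

def mc_predictions(logits_mean, logits_std, n_samples=50, rng=None):
    """Draw n_samples softmax outputs from a Gaussian over the logits —
    a crude stand-in for repeated stochastic passes through a BNN."""
    rng = rng or np.random.default_rng(0)
    samples = rng.normal(logits_mean, logits_std,
                         size=(n_samples, len(logits_mean)))
    exp = np.exp(samples - samples.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

def decide(probs, entropy_threshold=1.0):
    """Average the sampled probabilities, compute predictive entropy,
    and only commit to a terrain switch when uncertainty is low."""
    mean_p = probs.mean(axis=0)
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12))
    if entropy > entropy_threshold:
        return "hold current gait", entropy  # too uncertain to act
    return TERRAINS[int(mean_p.argmax())], entropy

# Confident case: one logit clearly dominates, so entropy is low
# and the controller commits to the predicted terrain.
confident = mc_predictions(np.array([4.0, 0, 0, 0, 0, 0]), 0.2)
print(decide(confident))

# Uncertain case: noisy, near-uniform logits produce high entropy,
# so the controller defers rather than risk a wrong gait change.
uncertain = mc_predictions(np.zeros(6), 1.5)
print(decide(uncertain))
```

The key design point mirrors the paper's framing: the calibrated probability is not just used to pick the most likely terrain, but also to decide *whether* to act at all — a conservative fallback is safer than a confident wrong guess for a weight-bearing prosthesis.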

"Incorporating computer vision into control software for wearable robotics is an exciting new area of research," adds co-author Helen Huang. "We found that using both cameras worked well, but required a great deal of computing power and may be cost prohibitive. However, we also found that using only the camera mounted on the lower limb worked pretty well – particularly for near-term predictions, such as what the terrain would be like for the next step or two."

The team's work has been published in the journal IEEE Transactions on Automation Science and Engineering under closed-access terms; more information is available on the NCSU website.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.