According to a recent study, nearly 20 million Americans have a significant visual impairment. For these individuals, daily life presents a variety of challenges that those with normal vision may not be aware of. Whether a visual impairment is present from birth or acquired later in life, it profoundly affects a person's education, work, and social life.
Mobility is a major concern for the visually impaired. Without the ability to see clearly, navigating unfamiliar streets and buildings can be difficult and dangerous. This can lead to a lack of independence and a reduced ability to participate in daily activities. According to the National Federation of the Blind, only 31 percent of employed blind individuals are able to work full-time.
Daily activities such as shopping, cooking, cleaning, and personal grooming can also be difficult for those with vision loss. Simple tasks such as reading labels on food packaging or measuring ingredients can become frustrating and time-consuming. Many visually impaired individuals also struggle with social isolation, as they may feel uncomfortable or unsafe participating in group activities or going out in public.
Despite these challenges, a number of organizations and technologies are working to make life easier for the visually impaired. These efforts include developing new technologies, such as voice-controlled devices, and providing training and support that help individuals learn new skills and techniques for completing daily tasks. While these efforts do offer meaningful help, they fall short of fully restoring the independence of individuals with severe vision loss.
New hope has been offered by the recent work of a group of engineers at the University of Colorado Boulder. They have taken a typical walking stick, of the sort the visually impaired commonly use to aid navigation, and given it a measure of intelligence by outfitting it with a camera and machine learning algorithms that interpret the world around it. The walking stick uses that information to give the user cues — verbal or vibrational — that help them perform their normal, daily activities.
The device could, for example, capture images of city streets as a person walks. Using onboard computer vision, it can determine when the user needs to take an action to reach their destination and give an audible instruction, such as “turn right.” Using the same techniques, the walking stick can also lend a hand at the grocery store: if the user has a particular type of breakfast cereal on their grocery list, the stick can vibrate when they reach for the correct box.
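The cue logic described above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the published system: the `Detection` type, the labels, and the confidence threshold are all assumptions, standing in for whatever the team's actual object detector produces.

```python
# Hypothetical sketch of the cue-selection logic described above.
# Detection results are assumed to come from an onboard object
# detector; the Detection fields and labels here are illustrative.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g., "crosswalk", "cereal_honey_oats"
    confidence: float   # detector score in [0, 1]
    bearing_deg: float  # object's angle relative to the stick's heading

def choose_cue(detections, target_item=None, min_confidence=0.8):
    """Map detector output to a verbal or vibrational cue."""
    for det in detections:
        if det.confidence < min_confidence:
            continue  # ignore low-confidence detections
        # Grocery mode: vibrate when the listed item is in reach.
        if target_item and det.label == target_item:
            return ("vibrate", None)
        # Navigation mode: announce a turn toward a detected waypoint.
        if det.label == "crosswalk":
            direction = "right" if det.bearing_deg > 0 else "left"
            return ("speak", f"turn {direction}")
    return (None, None)  # no cue needed

# Example: the stick sees the target cereal with high confidence.
cue = choose_cue([Detection("cereal_honey_oats", 0.93, 5.0)],
                 target_item="cereal_honey_oats")
```

In practice the interesting engineering lies in the detector itself and in choosing thresholds that avoid false cues; the mapping from detections to cues, as sketched here, can stay quite simple.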
To validate the technology, the team conducted a small study with 12 participants. The device was configured to help the user find an open seat at a cafe. Although the subjects were all sighted individuals wearing blindfolds, 10 of them successfully navigated to the targeted seat. Much more testing is needed before the device is ready for real-world use, but these results were certainly promising.
As it currently stands, the machine learning algorithms that power the walking stick’s functionality run on a laptop computer carried around in a backpack by the user. That suffices to prove the concept, but more work will be needed to reduce the computational requirements and, ideally, house all processing power in the walking stick itself. With a bit more testing and miniaturization, a device based on these basic methods may one day offer the visually impaired a much higher degree of independence.