In the rapidly advancing world of technology, alternative human-computer interface (HCI) devices have emerged as important tools, revolutionizing the way individuals interact with computers. These innovative solutions are not only changing the face of computing but also have a profound impact on the lives of disabled individuals, empowering them with newfound opportunities and independence.
Traditionally, computers have relied heavily on keyboards, mice, and touchscreens as primary HCI devices. However, these conventional interfaces pose significant limitations for people with disabilities, including mobility impairments, visual impairments, or conditions that affect motor skills. Fortunately, the advent of alternative HCI devices has paved the way for a more inclusive and accessible computing experience.
These technologies have been particularly transformative for individuals with severe mobility impairments, such as those living with spinal cord injuries or muscular dystrophy. But these interfaces need to be radically different from the traditional options when the target users have very significant physical limitations. Sometimes these options take the form of brain-computer interfaces, in which electrodes are physically implanted into the brain as part of a system that decodes the user’s intent.
But such an invasive device is not everyone’s cup of tea. Hackaday.io user MBW had an idea for a new type of minimally invasive interface that would allow anyone with the ability to blink their eyes to interact with a computer. The prototype device uses computer vision and machine learning to capture an image of a person’s eye and determine whether it is open or closed. When a user of the system blinks in Morse code, the blinks are translated into text or commands that can control virtually any computer system.
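The translation step at the end of that pipeline is essentially standard Morse decoding. As a minimal sketch (the project’s actual mapping and any custom command codes are not detailed in the write-up), here is how a stream of Morse letters could be turned into text:

```python
# Standard International Morse Code table for letters (digits and
# punctuation omitted for brevity).
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def morse_to_text(symbols: str) -> str:
    """Translate space-separated Morse letters ('/' = word gap) into text."""
    words = []
    for word in symbols.split("/"):
        letters = [MORSE.get(tok, "?") for tok in word.split() if tok]
        words.append("".join(letters))
    return " ".join(words)
```

For example, `morse_to_text(".... ..")` yields `"HI"`; unknown letter patterns fall back to `"?"` rather than crashing mid-message.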
An Espressif ESP-EYE development board was selected for the project because it is a compact yet relatively powerful platform. It also has a Bluetooth transceiver built in, which makes it simple to integrate with many host devices, like smartphones or laptops. This hardware was built into a repurposed face shield frame with the help of a 3D-printed case, so that the ESP-EYE’s camera points at the wearer’s eye.
As the device captures images, they are passed into a Haar cascade object detector to locate the position of the eye. The detected region is then passed into a machine learning binary classifier that determines whether the eye is open. An XGBoost classifier was converted into native C code using m2cgen to optimize the performance of the model on the resource-constrained ESP32. The publicly available MRL Eye Dataset was used to train the initial model.
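The m2cgen conversion is worth a closer look: the tool transpiles a trained gradient-boosted ensemble into dependency-free code, with each tree becoming a chain of threshold comparisons whose leaf scores are summed and squashed through a sigmoid. The sketch below illustrates that shape in Python; the thresholds, scores, and feature indices are made up for illustration, not taken from the project’s trained model:

```python
import math

# Illustrative stand-ins for two m2cgen-style exported trees: each is just
# nested threshold checks on a feature vector, returning a leaf score.
def tree_0(f):
    return 0.9 if f[0] < 0.5 else -0.7

def tree_1(f):
    return 0.4 if f[3] < 0.2 else -0.5

def predict_open_probability(features):
    """Sum the per-tree scores, then apply a sigmoid -> P(eye open)."""
    raw = tree_0(features) + tree_1(features)
    return 1.0 / (1.0 + math.exp(-raw))

def eye_is_open(features, threshold=0.5):
    return predict_open_probability(features) > threshold
```

Because the exported model is nothing but branches and additions, it runs in deterministic time with no heap allocation, which is exactly what a resource-constrained microcontroller like the ESP32 needs.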
Over time, the state of the eye and the duration of each state are tracked. Those states are interpreted as Morse code and translated into text, which can then be sent to a host device wirelessly via Bluetooth, although this final step has yet to be completed.
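Turning timed open/closed states into dots and dashes amounts to a small state machine over durations. A minimal sketch follows; the thresholds here are illustrative guesses, not the project’s tuned values:

```python
# Illustrative timing thresholds (seconds); a real system would tune these
# to the individual user's blink cadence.
DOT_MAX_S = 0.4      # closed shorter than this -> dot, longer -> dash
LETTER_GAP_S = 1.0   # open at least this long -> letter boundary
WORD_GAP_S = 2.5     # open at least this long -> word boundary

def states_to_morse(events):
    """events: list of (state, duration_s) pairs, state is 'open'/'closed'.

    Returns a Morse string with letters separated by spaces and words
    separated by ' / '.
    """
    out = []
    for state, dur in events:
        if state == "closed":
            out.append("." if dur < DOT_MAX_S else "-")
        elif dur >= WORD_GAP_S:
            out.append(" / ")
        elif dur >= LETTER_GAP_S:
            out.append(" ")
    return "".join(out)
```

Short open intervals between blinks fall below the letter-gap threshold and are simply ignored, so only deliberate pauses create boundaries in the output.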
The system was shown to capture images and run both the object detection and classification pipelines at an impressive rate of 25 frames per second. That should be faster than just about anyone can blink out Morse code in practice.
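A quick sanity check on that figure shows why 25 frames per second is plenty. The element durations below are illustrative, not measured from the project:

```python
FPS = 25
FRAME_MS = 1000 / FPS  # 40 ms between captured frames

def frames_in(element_ms):
    """How many full frames fit inside one Morse element of this length."""
    return element_ms // FRAME_MS

# An assumed ~200 ms "dot" blink spans about 5 frames and an assumed
# ~600 ms "dash" about 15, leaving ample samples to tell the two apart.
```

Even a very fast blinker would produce elements several frames long, so the classifier gets multiple chances to register each dot or dash.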
MBW is planning to implement the Bluetooth functionality next, but that should be much simpler than getting all of the machine learning algorithms running on the ESP32 at a high frame rate. Be sure to check out the project, which has been submitted for this year’s Hackaday Prize.