New York startup CTRL-labs has been developing what it terms a ‘neural controller,’ which reads the electrical activity produced by muscles and uses those signals as a kind of gestural input for computer applications. The company recently announced it’s set to release the controller as a development kit, complete with an SDK and API, at some point in the first quarter of 2019.
The CTRL-kit essentially uses differential electromyography (EMG) to translate electrical impulses into actions, measuring changes in electric potential produced by signals sent from the brain to the muscles of the user’s hand. The controller uses 16 electrodes to monitor those signals and feeds them through a machine-learning model (trained with TensorFlow) that distinguishes the individual pulses and maps them to gestures a computer can recognize. CTRL-labs CEO Thomas Reardon explained the technology at the 2018 O’Reilly AI conference in the video below.
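CTRL-labs hasn’t published the details of its signal pipeline, but the general shape of multi-channel EMG gesture classification is well established: extract a simple amplitude feature (such as root-mean-square) per electrode, then classify the feature vector. The sketch below is a toy illustration of that idea only, using simulated 16-channel data and a nearest-centroid classifier; every name, window size, and gesture label is an assumption, not the company’s actual method.

```python
import math
import random

NUM_CHANNELS = 16   # the CTRL-kit reportedly uses 16 electrodes
WINDOW = 50         # samples per analysis window (hypothetical)

def rms_features(window):
    """Root-mean-square amplitude per channel: a standard EMG feature."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

def synth_emg(active_channels, noise=0.05):
    """Simulate one window of 16-channel EMG: 'active' channels carry a
    strong oscillation, the rest only noise (a toy stand-in for real data)."""
    window = []
    for ch in range(NUM_CHANNELS):
        amp = 1.0 if ch in active_channels else 0.0
        window.append([amp * math.sin(0.3 * t) + random.gauss(0, noise)
                       for t in range(WINDOW)])
    return window

def nearest_centroid(features, centroids):
    """Classify a feature vector by Euclidean distance to per-gesture centroids."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda name: dist(features, centroids[name]))

# "Train": average features over a few labelled windows per (invented) gesture.
random.seed(0)
gestures = {"pinch": {0, 1}, "fist": {4, 5, 6, 7}}
centroids = {}
for name, chans in gestures.items():
    samples = [rms_features(synth_emg(chans)) for _ in range(5)]
    centroids[name] = [sum(col) / len(col) for col in zip(*samples)]

# Classify a fresh, previously unseen window.
print(nearest_centroid(rms_features(synth_emg({4, 5, 6, 7})), centroids))
```

A production system like CTRL-labs’ would replace the centroid step with a trained neural network (hence TensorFlow) and operate on far richer temporal features, but the input/output contract is the same: electrode signals in, gesture label out.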
According to CTRL-labs, the company’s SDK and API can track the positions and motions of each finger and joint without cameras, and include built-in recognition and classification of common gestures. The kit can measure pinching and grasping forces and detect muscle tension with or without motion. It also supports integration with VR and AR applications, making gaming a real possibility without external cameras or bulky hand-held controllers.
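The actual interface of the upcoming SDK hasn’t been published, so purely as an illustration of how gesture-event APIs of this kind are commonly shaped, here is a hypothetical sketch: applications register callbacks for named gestures and receive events carrying force and joint data. Every class, field, and gesture name below is invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class GestureEvent:
    """Hypothetical event payload; the real SDK's types are not yet public."""
    name: str                    # e.g. "pinch", "grasp"
    force: float                 # normalized 0..1 pinch/grasp force
    finger_angles: List[float]   # per-joint angles in degrees

class GestureDispatcher:
    """Tiny observer pattern: route recognized gestures to handlers,
    the shape many input-device SDKs expose."""
    def __init__(self):
        self._handlers: Dict[str, List[Callable[[GestureEvent], None]]] = {}

    def on(self, gesture: str, handler: Callable[[GestureEvent], None]):
        self._handlers.setdefault(gesture, []).append(handler)

    def emit(self, event: GestureEvent):
        for handler in self._handlers.get(event.name, []):
            handler(event)

# Usage: an app reacts to a pinch with its measured force.
dispatcher = GestureDispatcher()
log: List[str] = []
dispatcher.on("pinch", lambda e: log.append(f"pinch at force {e.force:.2f}"))
dispatcher.emit(GestureEvent("pinch", 0.42, [10.0] * 14))
print(log[0])  # pinch at force 0.42
```

An event-driven design like this is what makes camera-free VR/AR integration plausible: the game engine subscribes to gesture events rather than polling raw sensor frames.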
Beyond gaming, CTRL-labs feels its neural controller could find applications in maintaining complex industrial and aviation systems, in remote surgical robotics, and even in new ways of interacting with musical instruments. The exact release date and price are still unknown, but those interested can join the company’s waiting list through its website.