Meta Shows Off Its Promised Mind-Reading Wristband After Canceling Its Brain-Computer Interface
Meta ditched a 2021 breakthrough in brain-computer interface technology, and now it's ready to show off what replaced it: a muscle-reading wristband.
Researchers from Meta's Reality Labs have published a paper detailing a wrist-based wearable that provides a human-machine interface by reading muscle activity — a project the company has been working on since it abandoned its brain-computer interface research in 2021.
"We believe that surface electromyography (sEMG) at the wrist is the key to unlocking the next paradigm shift in human-computer interaction (HCI)," the company says in the announcement of its latest research paper. "We successfully prototyped an sEMG wristband with Orion, our first pair of true augmented reality (AR) glasses, but that was just the beginning. Our teams have developed advanced machine learning models that are able to transform neural signals controlling muscles at the wrist into commands that drive people's interactions with the glasses, eliminating the need for traditional — and more cumbersome — forms of input."
Meta announced its project to create an EMG-based wristband back in July 2021, after abandoning a brain-computer interface (BCI) program that had already restored speech to a paralyzed participant. "To our knowledge," Edward Chang, lead author of the study demonstrating that feat, said at the time, "this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak."
Meta, however, canceled the project. "While we still believe in the long-term potential of head-mounted optical BCI technologies," a spokesperson said, "we've decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market: wrist-based devices powered by electromyography."
That device is the focus of the paper published this week, described by its creators as "a generic non-invasive neuromotor interface that enables computer input decoded from surface electromyography (sEMG)" and linked to a machine learning model trained on "data from thousands of consenting participants."
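At its core, that means a decoder: a neural network that takes windows of multichannel sEMG signal from the wrist and maps them to events such as gestures. The toy PyTorch sketch below shows the general shape of such a pipeline as a rough illustration only; the channel count, window length, gesture vocabulary, and architecture are all invented for the example, not Meta's actual model, which is described in the paper and its accompanying GitHub release.

```python
# Hypothetical sketch: decoding gestures from windowed multichannel sEMG.
# All names, sizes, and the architecture below are illustrative assumptions.
import torch
import torch.nn as nn

NUM_CHANNELS = 16     # assumed electrode count on the wristband
WINDOW_SAMPLES = 400  # assumed window length, e.g. 200 ms at 2 kHz
NUM_GESTURES = 9      # assumed discrete-gesture vocabulary size

class GestureDecoder(nn.Module):
    """Small 1-D CNN mapping an sEMG window to gesture scores."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 32, kernel_size=7, stride=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(64, NUM_GESTURES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

# One decoding step on a synthetic sEMG window.
decoder = GestureDecoder()
window = torch.randn(1, NUM_CHANNELS, WINDOW_SAMPLES)  # stand-in for real sEMG
probs = decoder(window).softmax(dim=-1)
print("predicted gesture:", probs.argmax(dim=-1).item())
```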
"Test users demonstrate a closed-loop median performance of gesture decoding of 0.66 target acquisitions per second in a continuous navigation task," the researchers found, "0.88 gesture detections per second in a discrete-gesture task and handwriting at 20.9 words per minute. We demonstrate that the decoding performance of handwriting models can be further improved by 16 percent by personalizing sEMG decoding models."
The paper has been published in the journal Nature under open-access terms; model implementations and a framework for training and evaluation are available on GitHub under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license. At the time of writing, Meta had not disclosed a roadmap for commercializing the technology.