Neuroprosthesis Brain-Computer Interface Decodes Speech-Thoughts at Over 60 Words a Minute

Besting its rivals threefold in both speed and accuracy, this brain-computer interface could help people talk again.

Researchers at Stanford University have published a paper claiming a record in thought-to-text communication through a brain-computer interface (BCI), with a subject being able to "talk" at a rate of 62 words per minute — three times faster than rival approaches.

In a preprint, which has not yet been through peer review and which was brought to our attention by MIT Technology Review, the team explains the inner workings of its novel "neuroprosthesis" — a BCI that uses intracortical microelectrode arrays to capture high-resolution recordings of the brain activity associated with speech.

A new type of brain-computer interface has been shown to vastly outperform the competition for speech recognition. (📹: Willett et al)

To prove the concept, the team recruited a single study participant — an unidentified member of the public with amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease, which had left them unable to produce intelligible speech. When fitted with the neuroprosthesis BCI, the subject was able to think words and have them decoded at a rate of 62 words per minute — over three times faster than previous state-of-the-art BCI speech systems.

Speed is no good without accuracy, of course, but here the team claims a major milestone too: on a limited 50-word vocabulary the system showed a 9.1 percent word error rate, almost three times fewer errors than its rivals, and while expanding to a 125,000-word vocabulary raised the error rate to 23.8 percent, the system still proved usable.
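For those unfamiliar with the metric, word error rate is the fraction of words a decoder gets wrong, counted as the minimum number of substitutions, insertions, and deletions needed to turn its output into the reference sentence. Here's a minimal sketch of that calculation — the example sentences are hypothetical, and this is not the paper's evaluation code:

```python
# Illustrative word error rate (WER) calculation, the metric behind the
# 9.1 and 23.8 percent figures quoted above.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via Levenshtein edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One deletion ("a") and one substitution ("water" -> "waiter")
# across a seven-word reference: 2/7, or roughly 28.6 percent.
print(word_error_rate("i would like a glass of water",
                      "i would like glass of waiter"))
```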

The sensor system doesn't actually detect thoughts relating to speech, however, but instead focuses on movement — building on earlier work using the same system to control robotic arms or an on-screen keyboard. The subject simply tries to talk, and the implanted sensors record brain activity associated with the movements of the mouth and face relating to speech for decoding by a specially trained recurrent neural network (RNN) — even if the user's mouth is incapable of actually moving. A sketch of that decoding step follows.
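In broad strokes, the decoder is a recurrent network turning time-binned neural features into a stream of speech-unit probabilities, which a language model can then assemble into words. The following is a minimal PyTorch illustration with assumed dimensions and layer choices, not the team's actual architecture:

```python
# A minimal sketch of RNN-based speech decoding from intracortical
# recordings. Channel counts, bin rate, and the phoneme inventory are
# assumptions for illustration only.
import torch
import torch.nn as nn

N_CHANNELS = 256   # assumed: neural features per time bin across the arrays
N_PHONEMES = 41    # assumed: an English phoneme set plus a "silence" token

class SpeechDecoderRNN(nn.Module):
    def __init__(self, hidden: int = 512):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, hidden, num_layers=2, batch_first=True)
        self.readout = nn.Linear(hidden, N_PHONEMES)

    def forward(self, neural_bins: torch.Tensor) -> torch.Tensor:
        # neural_bins: (batch, time, channels) of binned spike features
        states, _ = self.rnn(neural_bins)
        # Per-time-step phoneme logits: (batch, time, N_PHONEMES)
        return self.readout(states)

# One second of hypothetical activity at 50 bins per second:
logits = SpeechDecoderRNN()(torch.randn(1, 50, N_CHANNELS))
print(logits.shape)  # torch.Size([1, 50, 41])
```

The key point the sketch captures is that nothing here "reads thoughts" — the network's input is activity tied to attempted articulator movement, and its output is only as good as the training data mapping that activity to speech sounds.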

"Our demonstration is a proof of concept that decoding attempted speaking movements from intracortical recordings is a promising approach," the researchers admit, "but it is not yet a complete, clinically viable system. Work remains to be done to reduce the time needed to train teh decoder and adapt to change s in neural activity that occur across days without requiring the user to pause and recalibrate the BCI. Perhaps most importantly, a 24 percent word error rate is likely not yet low enough for everyday use."

The preprint is available on Cold Spring Harbor Laboratory's bioRxiv server now, under open access terms.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.