A team from the USC Viterbi School of Engineering has turned to generative adversarial networks — the technology normally associated with deepfake videos and generated images like This Person Does Not Exist — to build better brain-computer interfaces for people with disabilities.
"Getting enough data for the algorithms that power BCIs can be difficult, expensive, or even impossible if paralyzed individuals are not able to produce sufficiently robust brain signals," Laurent Itti, professor and co-author of the study, explains. The solution: Using generative adversarial networks to generate synthetic yet useful stand-in data from a small pool of real data.
"Less than a minute’s worth of real data combined with the synthetic data works as well as 20 minutes of real data," claims Shixian Wen, PhD student and lead author, of an experiment which saw real data captured from a monkey reaching for an object fed to the GAN for synthesis. "It is the first time we’ve seen AI generate the recipe for thought or movement via the creation of synthetic spike trains. This research is a critical step towards making BCIs more suitable for real-world use."
Once trained on the synthetic data, the system also proved more adaptable to new sessions and subjects — even with only limited additional data to work with. "That’s the big innovation here — creating fake spike trains that look just like they come from this person as they imagine doing different motions," Itti explains, "then also using this data to assist with learning on the next person."
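The augmentation idea described above can be sketched with a toy example. To be clear, the study trains a GAN on recorded neural spike trains; everything below is an illustrative assumption, with a simple Gaussian generator fitted to fake Poisson "spike counts" standing in for the GAN, and all channel rates, trial counts, and movement labels invented for the sketch. The pipeline shape, though, is the one the article describes: a handful of real trials, many synthetic trials generated from them, and a decoder trained on the combined set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for recorded neural data: two imagined
# movements, each a 10-channel firing-rate pattern with Poisson noise.
def record_trials(n_trials, rates):
    return rng.poisson(rates, size=(n_trials, len(rates)))

rates_left  = np.array([20, 5, 12, 3, 18, 7, 9, 15, 4, 11])
rates_right = np.array([6, 19, 4, 14, 2, 16, 13, 5, 17, 8])

# "Under a minute" of real data: only a few trials per movement.
real = {"left": record_trials(5, rates_left),
        "right": record_trials(5, rates_right)}

# Stand-in generator: fit per-channel mean/std and sample Gaussians.
# The study trains a GAN for this role, which can learn far richer
# structure; the augmentation logic downstream is the same.
def fit_generator(trials):
    mu, sd = trials.mean(axis=0), trials.std(axis=0) + 1e-6
    return lambda n: rng.normal(mu, sd, size=(n, len(mu)))

# Augment: stack the few real trials with many synthetic ones.
augmented = {m: np.vstack([t, fit_generator(t)(200)])
             for m, t in real.items()}

# Minimal decoder: classify a trial by its nearest class mean.
def train_decoder(data):
    means = {m: t.mean(axis=0) for m, t in data.items()}
    return lambda x: min(means, key=lambda m: np.linalg.norm(x - means[m]))

decode = train_decoder(augmented)
held_out = record_trials(100, rates_left)
acc = np.mean([decode(x) == "left" for x in held_out])
print(f"accuracy on held-out 'left' trials: {acc:.2f}")
```

The same structure extends to the transfer result the article mentions: a generator fitted (or fine-tuned) on one session can seed training data for the next, so the decoder needs fewer fresh trials from a new session or subject.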
"When a company is ready to start commercializing a robotic skeleton, robotic arm or speech synthesis system, they should look at this method, because it might help them with accelerating the training and retraining. As for using GAN to improve brain-computer interfaces, I think this is only the beginning."
The team's work has been published under closed-access terms in the journal Nature Biomedical Engineering.