AI Algorithm for Reading EEG "Error Messages" Brings Brain-Controlled Robots One Step Closer

Designed to quickly figure out what's wrong via machine learning, this approach to brain-machine interfaces shows real promise.

Researchers at the Ecole Polytechnique Fédérale de Lausanne (EPFL) have developed a computer program designed to control a robot through the user's thoughts alone — a key step towards universal accessibility.

"People with a spinal cord injury often experience permanent neurological deficits and severe motor disabilities that prevent them from performing even the simplest tasks, such as grasping an object," explains Aude Billard, professor and the head of EPFL’s Learning Algorithms and Systems Laboratory, of the possible impact of the work. "Assistance from robots could help these people recover some of their lost dexterity, since the robot can execute tasks in their place."

Work on reading "error messages" via EEG could prove key to developing reliable robots controlled by thought alone. (📹: Batzianoulis et al)

Targeting a relatively simple robotic arm, the team sought a way for the operator to control it without voice or touch, simply by looking at the robot. An electroencephalogram (EEG) sensor reads the user's brain activity, picking up "error messages" when the robot makes a mistake — errors that are contextualized through an inverse reinforcement learning approach, which tries three of five possible solutions to figure out the user's requirements.

"What was particularly difficult in our study was linking a patient's brain activity to the robot's control system — or in other words, 'translating' a patient's brain signals into actions performed by the robot," says lead author Iason Batzianoulis. "We did that by using machine learning to link a given brain signal to a specific task. Then we associated the tasks with individual robot controls so that the robot does what the patient has in mind."

Subjects initially controlled the arm via joystick; in later experiments, control was handed to a third party. (📷: Batzianoulis et al)

"The robot’s AI program can learn rapidly, but you have to tell it when it makes a mistake so that it can correct its behavior," explains José del R. Millán, former head of the EPFL's Brain-Machine Interface Lab and now professor at the University of Texas. "Developing the detection technology for error signals was one of the biggest technical challenges we faced."

The team's next step is the development of a mind-controlled wheelchair, though Billard warns there are hurdles to overcome: "Wheelchairs pose an entirely new set of challenges," she says, "since both the patient and the robot are in motion."

The team's work has been published under open-access terms in the journal Communications Biology.

ghalfacree

Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.
