Mind-controlled Robots

Two EPFL research groups teamed up to develop a machine-learning program that can be connected to a human brain and used to command a robot. The program adjusts the robot’s movements based on electrical signals from the brain. Tetraplegic patients are prisoners of their own bodies, unable to speak or perform the slightest movement, and researchers have been working for years to develop systems that can help them carry out some tasks on their own. The hope is that, with this invention, these patients will be able to handle more day-to-day activities independently.

“People with a spinal cord injury often experience permanent neurological deficits and severe motor disabilities that prevent them from performing even the simplest tasks, such as grasping an object,” says Prof. Aude Billard, the head of EPFL’s Learning Algorithms and Systems Laboratory. “Assistance from robots could help these people recover some of their lost dexterity, since the robot can execute tasks in their place.”

Prof. Billard carried out a study with Prof. José del R. Millán, who at the time was the head of EPFL’s Brain-Machine Interface Laboratory but has since moved to the University of Texas. The two research groups have developed a computer program that can control a robot using electrical signals emitted by a patient’s brain. No voice control or touch function is needed; patients can move the robot simply with their thoughts. The study has been published in Communications Biology, an open-access journal from Nature Portfolio.

To develop their system, the researchers started with a robotic arm that had been developed several years ago. This arm can move back and forth from right to left, reposition objects in front of it and get around objects in its path. “In our study we programmed a robot to avoid obstacles, but we could have selected any other kind of task, like filling a glass of water or pushing or pulling an object,” says Prof. Billard. This entailed developing an algorithm that could adjust the robot’s movements based only on a patient’s thoughts. The algorithm was connected to a headcap equipped with electrodes for running electroencephalogram (EEG) scans of a patient’s brain activity.

To use the system, all the patient needs to do is look at the robot. If the robot makes an incorrect move, the patient’s brain will emit an “error message” through a clearly identifiable signal, as if the patient is saying “No, not like that.” The robot will then understand that what it’s doing is wrong – but at first it won’t know exactly why. For instance, did it get too close to, or too far away from, the object? To help the robot find the right answer, the error message is fed into the algorithm, which uses an inverse reinforcement learning approach to work out what the patient wants and what actions the robot needs to take. This is done through a trial-and-error process whereby the robot tries out different movements to see which one is correct.
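The trial-and-error loop described above can be illustrated with a short, simplified sketch in Python. This is not the authors’ algorithm (the study uses an inverse reinforcement learning approach); it only shows the core idea of discarding a candidate movement whenever an EEG error signal is detected. The clearance values, the simulated patient preference and the decoder stub are hypothetical placeholders.

```python
# Simplified sketch of the error-driven correction loop, NOT the study's code.
# A candidate movement is dropped whenever the (stubbed) EEG decoder reports an
# error-related signal, until the robot settles on a movement the patient accepts.

CANDIDATE_CLEARANCES_CM = [2, 5, 10, 15, 20]   # candidate distances from the obstacle
PREFERRED_CLEARANCE_CM = 10                    # stands in for the patient's (unknown) wish

def detect_error_potential(executed_clearance_cm: int) -> bool:
    """Stub for the EEG decoder: flags an error whenever the robot's move
    differs from what the patient had in mind ("No, not like that")."""
    return executed_clearance_cm != PREFERRED_CLEARANCE_CM

def move_with_clearance(clearance_cm: int) -> None:
    """Stub for the robot controller: pass the obstacle at this distance."""
    print(f"robot passes the obstacle at {clearance_cm} cm")

def trial_and_error_adjustment(max_attempts: int = 5) -> int:
    """Try candidate clearances, discarding any that trigger an error signal."""
    remaining = list(CANDIDATE_CLEARANCES_CM)
    for _ in range(max_attempts):
        clearance = remaining[0]                # try the next surviving candidate
        move_with_clearance(clearance)
        if detect_error_potential(clearance):
            remaining.remove(clearance)         # penalized: never try this one again
        else:
            return clearance                    # no error signal: keep this behaviour
    return remaining[0]

if __name__ == "__main__":
    print("settled on", trial_and_error_adjustment(), "cm")
```

In this toy run the loop converges within a few attempts, which mirrors the three to five attempts reported in the article, but the real system infers the patient’s intent rather than testing options blindly.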

The process goes pretty quickly – only three to five attempts are usually needed for the robot to figure out the right response and execute the patient’s wishes. “The robot’s AI program can learn rapidly, but you have to tell it when it makes a mistake so that it can correct its behavior,” says Prof. Millán. “Developing the detection technology for error signals was one of the biggest technical challenges we faced.” Iason Batzianoulis, the study’s lead author, adds: “What was particularly difficult in our study was linking a patient’s brain activity to the robot’s control system – or in other words, ‘translating’ a patient’s brain signals into actions performed by the robot. We did that by using machine learning to link a given brain signal to a specific task. Then we associated the tasks with individual robot controls so that the robot does what the patient has in mind.”
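The “translation” step Batzianoulis describes, from a decoded brain signal to a task and from the task to a robot control, could look roughly like the following sketch. The linear-discriminant classifier and the task-to-command table are illustrative assumptions, not details taken from the study.

```python
# Assumed illustration of the two-step "translation": a classifier maps an EEG
# feature vector to a task label, and a lookup table maps each task label to a
# robot control command.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

TASK_TO_COMMAND = {                    # hypothetical task -> robot control mapping
    "reach_left":  "arm.move(dx=-0.1)",
    "reach_right": "arm.move(dx=+0.1)",
    "avoid":       "arm.replan_around_obstacle()",
}

def train_decoder(eeg_features: np.ndarray, task_labels: list) -> LinearDiscriminantAnalysis:
    """Fit a simple linear classifier on labelled EEG feature vectors
    (shape: n_trials x n_features)."""
    decoder = LinearDiscriminantAnalysis()
    decoder.fit(eeg_features, task_labels)
    return decoder

def brain_signal_to_command(decoder: LinearDiscriminantAnalysis,
                            eeg_features: np.ndarray) -> str:
    """Translate a single EEG feature vector into the matching robot command."""
    task = decoder.predict(eeg_features.reshape(1, -1))[0]
    return TASK_TO_COMMAND[task]
```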

Source: https://actu.epfl.ch/

How to Track the Brain as it Generates an Antonym

A study using epilepsy patients undergoing surgery has given neuroscientists an opportunity to track in unprecedented detail the movement of a thought through the human brain, all the way from inspiration to response. The findings, published in 2018, confirmed the role of the prefrontal cortex as the coordinator of complex interactions between different regions, linking our perception with action and serving as what can be considered the “glue of cognition”.

Previous efforts to measure the passing of information from one area to the other have relied on processes such as electroencephalography (EEG) or functional magnetic resonance imaging (fMRI), which, while non-invasive, offer less than perfect resolution. The study led by researchers from the University of California, Berkeley, recorded the electrical activity of neurons using a precise technique called electrocorticography (ECoG). This required hundreds of tiny electrodes to be placed right up against the cortex, providing more spatial detail than EEG and better temporal resolution than fMRI. While this poses an unethical level of risk for your average volunteer, patients undergoing surgery for epilepsy have their brain activity monitored in this very way, giving the researchers a perfect chance to conduct a few tests.

Each of the 16 test subjects performed a number of tasks varied to suit their individual arrangement of electrodes, all while having their neural activity monitored and tracked. Participants were required to listen to a stimulus and respond, or to watch images of faces or animals on a screen and perform an action. Some tasks were more complex than others; for example, a simple task was to repeat a word just heard, while a more complex version was to think of its antonym.

Researchers monitored the split-second movement of electrical activity from one area to another – for example, from areas associated with interpreting auditory stimuli, to the prefrontal cortex, and on to areas needed to shape an action, such as the motor cortex.


While none of this threw up any surprises, the results clearly emphasized the role of the prefrontal cortex in directing activity.

For some tasks its input was fairly limited. In others the area was required to work hard, managing signals from multiple parts of the brain to coordinate the recognition of words, possibly dredging up memories before setting to work a bunch of muscles to provide a novel answer. “These very selective studies have found that the frontal cortex is the orchestrator, linking things together for a final output,” neuroscientist Robert Knight from UC Berkeley said at the time. “It’s the glue of cognition.”

The prefrontal cortex was seen to remain active throughout most of the thought process, as would be expected for a multitasking region of the brain. The quicker the handoff from one area to the other, the faster people responded to a stimulus. “fMRI studies often find that when a task gets progressively harder, we see more activity in the brain, and the prefrontal cortex in particular,” said the study’s lead author, neuroscientist Avgusta Shestyuk. “Here, we are able to see that this is not because the neurons are working really, really hard and firing all the time, but rather, more areas of the cortex are getting recruited.”

What did come as something of a surprise were details on the precise timing of each area. Some of the responding areas lit up remarkably early, often during the stimulus, suggesting that even before we have a complete response handy, our brain is already getting those parts of the cortex ready for action. “This might explain why people sometimes say things before they think,” suggests Shestyuk.

This research was published in Nature Human Behaviour.

Source: https://www.sciencealert.com/

How To Recreate Memories Of Faces From Brain Data

A new technique developed by neuroscientists at the University of Toronto can reconstruct images of what people perceive based on their brain activity. The technique developed by Dan Nemrodov, a postdoctoral fellow in Assistant Professor Adrian Nestor’s lab at U of T Scarborough, is able to digitally reconstruct images seen by test subjects based on electroencephalography (EEG) data.


“When we see something, our brain creates a mental percept, which is essentially a mental impression of that thing. We were able to capture this percept using EEG to get a direct illustration of what’s happening in the brain during this process,” says Nemrodov.

For the study, test subjects hooked up to EEG equipment were shown images of faces. Their brain activity was recorded and then used to digitally recreate the image in the subject’s mind using a technique based on machine learning algorithms. It’s not the first time researchers have been able to reconstruct images based on visual stimuli using neuroimaging techniques. The current method was pioneered by Nestor, who successfully reconstructed facial images from functional magnetic resonance imaging (fMRI) data in the past, but this is the first time EEG has been used.
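At a high level, reconstruction pipelines of this kind learn a mapping from brain responses to a compact image space and then invert that space to render a picture. The sketch below is a simplified, assumed illustration of that idea (ridge regression onto PCA components of the training faces); it is not the actual method used by Nemrodov and Nestor.

```python
# Assumed, simplified sketch of EEG-to-image reconstruction: learn a linear
# mapping from EEG responses to a compact "face space" (PCA components of the
# training images), then invert that space to render a face for a new EEG trial.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def fit_reconstruction_model(eeg_features: np.ndarray,
                             face_images: np.ndarray,
                             n_components: int = 50):
    """eeg_features: (n_trials, n_eeg_features); face_images: (n_trials, n_pixels)."""
    face_space = PCA(n_components=n_components)
    face_coords = face_space.fit_transform(face_images)        # compress faces to a small basis
    mapping = Ridge(alpha=1.0).fit(eeg_features, face_coords)  # regress EEG onto face space
    return face_space, mapping

def reconstruct_face(face_space: PCA, mapping: Ridge, new_eeg: np.ndarray) -> np.ndarray:
    """Predict face-space coordinates from one EEG trial and project back to pixels."""
    coords = mapping.predict(new_eeg.reshape(1, -1))
    return face_space.inverse_transform(coords).ravel()
```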

Source: https://www.reuters.com/ and https://www.utoronto.ca/