Noninvasive brain imaging enables gesture recognition for brain-computer interfaces

A team of researchers at the University of California San Diego has developed a method to distinguish hand gestures by analyzing noninvasive brain imaging data alone, with no input from the hands themselves. The advance could pave the way for a noninvasive brain-computer interface that would let people with physical challenges, such as paralysis or amputated limbs, control devices with their thoughts.

The study, published in the journal Cerebral Cortex, distinguishes gestures made with a single hand using magnetoencephalography (MEG), a noninvasive imaging technique. A helmet fitted with a 306-sensor array detects the magnetic fields produced by the electric currents flowing between neurons in the brain. This approach avoids the drawbacks of other brain-computer interface methods: electrocorticography (ECoG) requires surgically implanting electrodes on the brain surface, while scalp electroencephalography (EEG) localizes brain activity less precisely.
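To make the recording step concrete, here is a minimal sketch of loading such a 306-channel MEG recording in Python with the open-source MNE-Python library. The library choice and file name are assumptions for illustration; the study does not specify its analysis software.

import mne

# Read a hypothetical continuous MEG recording saved in the .fif format used by
# 306-sensor MEG systems.
raw = mne.io.read_raw_fif("gesture_session_raw.fif", preload=True)

# Count the MEG sensors (magnetometers and planar gradiometers) in the helmet.
print(raw.copy().pick_types(meg=True).info["nchan"])   # 306 for this helmet design

# Band-pass filter to suppress slow drifts and high-frequency noise before decoding.
raw.filter(l_freq=1.0, h_freq=40.0)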

Senior author Mingxiong Huang, Ph.D., co-director of the MEG Center at the Qualcomm Institute at UC San Diego, said the goal was to avoid invasive components entirely in developing the brain-computer interface. Huang, who also holds appointments at the UC San Diego Jacobs School of Engineering, the UC San Diego School of Medicine, and the Veterans Affairs (VA) San Diego Healthcare System, described MEG as a safe and accurate option for building a brain-computer interface that could ultimately benefit patients.

This research marks a significant step forward in harnessing the potential of noninvasive brain-computer interfaces, opening up possibilities for improving the quality of life for individuals with physical disabilities.

The new research from the Qualcomm Institute at UC San Diego used machine learning and a noninvasive imaging technique called magnetoencephalography (MEG). Illustrated here is the 306-sensor MEG helmet that detects nerve activity in the brain by measuring the magnetic field. Credit: MEG Center at UC San Diego Qualcomm Institute

Roland Lee, MD, co-author of the study, director of the MEG Center at the UC San Diego Qualcomm Institute, emeritus professor of radiology at the UC San Diego School of Medicine, and physician with the VA San Diego Healthcare System, highlighted the safety and noninvasiveness of MEG. He explained that MEG lets researchers observe brain activity without invasive procedures such as implanting electrodes directly into the brain; the MEG helmet simply rests on the patient’s head. This eliminates the risk of broken electrodes, expensive and delicate brain surgery, and potential brain infections.

Dr. Lee compared using MEG to taking a patient’s temperature: MEG measures the magnetic energy the brain puts out, much as a thermometer measures the heat the body puts out. This makes MEG an inherently safe and reliable method for studying brain activity.

Rock paper scissors

The study tested whether the technique could differentiate hand gestures in 12 volunteers who wore the MEG helmet while making the gestures of the game Rock Paper Scissors, as in previous studies. The MEG functional data were combined with MRI images, which provide structural information about each participant’s brain.
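As an illustration of how such recordings are typically prepared for a classifier (an assumed pipeline, not the authors' published code), the continuous signal can be cut into one epoch per gesture trial. The stimulus channel name and trigger codes below are hypothetical.

import mne

raw = mne.io.read_raw_fif("gesture_session_raw.fif", preload=True)

# Locate the gesture-cue triggers; the stimulus channel name is an assumption.
events = mne.find_events(raw, stim_channel="STI 014")
event_id = {"rock": 1, "paper": 2, "scissors": 3}   # hypothetical trigger codes

# Cut one epoch per trial, from 0.2 s before each cue to 1.0 s after it.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=1.0,
                    baseline=(None, 0), picks="meg", preload=True)

X = epochs.get_data()     # array of shape (n_trials, n_meg_sensors, n_time_samples)
y = epochs.events[:, 2]   # gesture label for each trial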

To analyze the data, Yifeng (“Troy”) Bu, a Ph.D. student in electrical and computer engineering at UC San Diego, developed a deep learning model called MEG-RPSnet. The model’s distinguishing feature is that it combines spatial and temporal features of the MEG signal, which gave it superior performance compared with previous models.
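The MEG-RPSnet architecture itself is not reproduced here, but the idea of jointly learning spatial (across sensors) and temporal (across time samples) features can be sketched in PyTorch. The layer types and sizes below are illustrative assumptions, not the published network.

import torch
import torch.nn as nn

class SpatioTemporalNet(nn.Module):
    """Toy spatiotemporal classifier for three gestures (rock, paper, scissors)."""
    def __init__(self, n_sensors=306, n_times=500, n_classes=3):
        super().__init__()
        # Temporal convolution: learns filters along the time axis, shared across sensors.
        self.temporal = nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12))
        # Spatial convolution: mixes information across all sensors at each time point.
        self.spatial = nn.Conv2d(16, 32, kernel_size=(n_sensors, 1))
        self.pool = nn.AvgPool2d(kernel_size=(1, 4))
        self.classifier = nn.Linear(32 * (n_times // 4), n_classes)

    def forward(self, x):                      # x: (batch, 1, n_sensors, n_times)
        x = torch.relu(self.temporal(x))       # -> (batch, 16, n_sensors, n_times)
        x = torch.relu(self.spatial(x))        # -> (batch, 32, 1, n_times)
        x = self.pool(x)                       # -> (batch, 32, 1, n_times // 4)
        return self.classifier(x.flatten(1))   # -> (batch, n_classes)

model = SpatioTemporalNet()
dummy = torch.randn(8, 1, 306, 500)    # a batch of 8 simulated single-trial epochs
print(model(dummy).shape)              # torch.Size([8, 3]): one score per gesture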

The techniques distinguished the hand gestures with more than 85% accuracy, comparable to results from earlier studies that used invasive ECoG brain-computer interfaces with much smaller sample sizes.

The researchers also found that classification based on MEG measurements from only half of the sampled brain regions lost little accuracy (roughly 2-3%), which suggests that future MEG helmets may need fewer sensors.
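A hedged sketch of that sensor-reduction idea, reusing the illustrative network above: keep half of the channels and retrain a correspondingly smaller model. The array names here are placeholders, not data or code from the study.

import numpy as np

def restrict_sensors(X, keep_idx):
    """Keep only the selected sensor channels of an (n_trials, n_sensors, n_times) array."""
    return X[:, keep_idx, :]

keep_half = np.arange(306)[::2]   # every other sensor: 153 of the 306
# X_half = restrict_sensors(X, keep_half)
# model_half = SpatioTemporalNet(n_sensors=len(keep_half))  # retrained at the reduced sensor count
# The paper reports that this kind of reduction costs only about 2-3% accuracy.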

Yifeng Bu emphasized that this work lays the groundwork for the future development of MEG-based brain-computer interfaces.

The study, titled “Magnetoencephalogram-based brain-computer interface for hand-gesture decoding using deep learning,” was authored by researchers including Mingxiong Huang, Roland Lee, Deborah L. Harrington, Qian Shen, Annemarie Angeles-Quinto, Hayden Hansen, Zhengwei Ji, Jaqueline Hernandez-Lucas, Jared Baumgartner, Tao Song, Sharon Nichols, Dewleen Baker, Imanuel Lerman, Ramesh Rao, Tuo Lin, and Xin Ming Tu, all affiliated with UC San Diego, the UC San Diego School of Medicine, the VA San Diego Healthcare System, or the Qualcomm Institute.

Source: University of California - San Diego