Enabling speech through thoughts alone

Image: 3D rendering of neuron cells with light pulses on a dark background. Credit: K_E_N | iStock

A team of neuroscientists, neurosurgeons and engineers has created a speech prosthesis that can interpret a person’s brain signals and convert them into speech.

In the future, this technology could assist those suffering from neurological disorders who have lost their ability to speak, enabling them to communicate using a brain-computer interface.

“There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak,” said Gregory Cogan, PhD, a professor of neurology at Duke University’s School of Medicine and one of the lead researchers involved in the project. “But the current tools available to allow them to communicate are generally very slow and cumbersome.”

Brain-speech decoder

Currently, the most advanced speech-decoding technology operates at approximately 78 words per minute, significantly slower than the typical human speaking rate of around 150 words per minute.

The delay in decoding speech is partly due to the limited number of brain activity sensors that can be integrated into an ultra-thin material placed on the brain’s surface. Fewer sensors provide a reduced amount of data for interpretation.
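
To illustrate the point, the short Python sketch below shows how the volume of raw recorded signal scales linearly with the number of channels. The sampling rate and sample size are assumed values for illustration, not specifications of the actual device.

```python
# Rough illustration: recorded data volume grows linearly with channel count.
# SAMPLE_RATE_HZ and BYTES_PER_SAMPLE are assumed values, not the device's
# published specifications.

SAMPLE_RATE_HZ = 1_000    # assumed samples per second per channel
BYTES_PER_SAMPLE = 2      # assumed 16-bit samples

for n_channels in (64, 128, 256):
    rate_kb_s = n_channels * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE / 1_000
    print(f"{n_channels:3d} channels -> {rate_kb_s:.0f} kB/s of raw signal")
```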

Cogan teamed up with fellow Duke Institute for Brain Sciences faculty member Jonathan Viventi, PhD, whose biomedical engineering lab specialises in making high-density, ultra-thin, and flexible brain sensors designed to overcome those limitations.

In this project, Viventi and his team packed 256 tiny brain sensors onto a piece of flexible, medical-grade plastic no larger than a postage stamp.

Neurons just a grain of sand apart can exhibit vastly different activity patterns when coordinating speech, so differentiating signals from adjacent brain cells is crucial for making precise predictions about intended speech.
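
A back-of-envelope calculation shows the densities involved. The square grid layout, array width, and sand-grain size below are all illustrative assumptions; the article states only that 256 sensors fit on a stamp-sized piece of plastic.

```python
# Back-of-envelope check: how closely spaced are 256 electrodes on a
# postage-stamp-sized array? All dimensions are illustrative assumptions.

GRID_SIDE = 16           # assume a square 16 x 16 layout (16 * 16 = 256)
ARRAY_WIDTH_MM = 20.0    # assumed stamp-like array width, in millimetres
SAND_GRAIN_MM = 0.5      # rough diameter of a grain of sand

pitch_mm = ARRAY_WIDTH_MM / GRID_SIDE    # centre-to-centre spacing

print(f"Electrode pitch: ~{pitch_mm:.2f} mm")
print(f"Grains of sand between neighbours: ~{pitch_mm / SAND_GRAIN_MM:.1f}")
```

Under these assumptions, neighbouring contacts sit just over a millimetre apart, close to the scale at which speech-related activity patterns begin to differ.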

Testing the brain-speech decoder

Following the creation of the new implant, Cogan and Viventi collaborated with Duke University Hospital neurosurgeons, including Derek Southwell, MD, PhD, Nandan Lad, MD, PhD, and Allan Friedman, MD.

They enlisted four patients to test the implants; the experiments involved temporarily implanting the device into patients already undergoing brain surgery for conditions like Parkinson’s disease or tumour removal.

In the test, participants were asked to repeat nonsense words such as “ava,” “kug,” or “vip”. The device recorded their brain activity as they engaged their speech-related muscles.

Suseendrakumar Duraivel, the lead author of the new study and a biomedical engineering graduate student at Duke, then used a machine learning algorithm to predict which sounds were being made from the brain data alone. Accuracy was highest (84%) for the initial sounds in three-sound sequences, such as the “g” in “gak,” and dropped for sounds in the middle or at the end of words, as well as for similar sounds such as /p/ and /b/.
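
To give a sense of what this decoding step involves, the sketch below trains a simple multi-class classifier to map multi-channel brain activity features to sound labels. It is not the model used in the study: the data are synthetic, the per-channel features and the sound set are assumptions, and a plain logistic regression stands in for the actual algorithm.

```python
# Minimal sketch of phoneme decoding: map a window of multi-channel brain
# activity to the sound being spoken. Synthetic data and a plain classifier
# stand in for the study's actual recordings and model.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_CHANNELS = 256                          # matches the implant's sensor count
SOUNDS = ["a", "g", "k", "p", "b", "v"]   # illustrative label set

# Synthetic stand-in data: each sound gets its own mean activity pattern,
# with one feature per channel (assumed to be something like average
# high-frequency power).
n_trials = 600
labels = rng.integers(len(SOUNDS), size=n_trials)
patterns = rng.normal(size=(len(SOUNDS), N_CHANNELS))
X = patterns[labels] + rng.normal(scale=2.0, size=(n_trials, N_CHANNELS))
y = np.array(SOUNDS)[labels]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```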

Decoder accuracy and significance

Overall, the decoder achieved an accuracy rate of 40%, which is noteworthy considering that comparable brain-to-speech technologies typically require hours or even days of data to make accurate predictions. Duraivel’s speech decoding algorithm reached this level using only 90 seconds of spoken data from the 15-minute test.
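
One way to put the 40% figure in context is to compare it with random guessing. The article does not say how many candidate sounds the decoder chose between, so the label-set sizes below are purely hypothetical.

```python
# Sanity check: how does 40% overall accuracy compare with uniform guessing?
# The label-set sizes are hypothetical; the study's actual set is not stated
# in the article.

OBSERVED = 0.40

for n_classes in (4, 6, 9):
    chance = 1.0 / n_classes
    print(f"{n_classes} classes: chance {chance:.0%}, "
          f"observed {OBSERVED:.0%} ({OBSERVED / chance:.1f}x chance)")
```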

“We’re now developing the same kind of recording devices, but without any wires,” Cogan said.

“You’d be able to move around, and you wouldn’t have to be tied to an electrical outlet, which is really exciting.”

Duraivel and his mentors are enthusiastic about developing a wireless version of the device, thanks to a recent $2.4 million grant from the National Institutes of Health.
