Brain-Computer Combo Lets Mute Man With ALS ‘Talk’ Again

Casey Harrell was losing his ability to speak due to amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease.

“Not being able to communicate is so frustrating and demoralizing. It is like you are trapped,” said Harrell, 45.

But a new brain-computer interface is allowing Harrell to speak to others once more, with his brain providing the words and a computer giving them voice.

Four microelectrode arrays implanted into a brain region responsible for coordinating speech detect the words that Harrell wants to say and send that information to a computer program.

During Harrell’s first session, the system took 30 minutes to achieve more than 90% word accuracy with a 50-word vocabulary, researchers said.

The decoded words appear on a screen and are read aloud in a voice that sounds like Harrell’s did before he developed ALS.

“The first time we tried the system, he cried with joy as the words he was trying to say correctly appeared on screen. We all did,” said researcher Sergey Stavisky, co-director of the University of California, Davis Neuroprosthetics Lab.

ALS affects the nerve cells that control movement throughout the body, researchers said. It causes a gradual loss of the ability to stand, walk, use the hands and even speak.

By the time Harrell entered the study, he had developed weakness in his arms and legs and his speech was very hard to understand.

In July 2023, researchers implanted the electrodes into Harrell’s brain, so that they could record brain activity related to speech.

“We’re really detecting their attempt to move their muscles and talk,” Stavisky explained in a university news release. “We are recording from the part of the brain that’s trying to send these commands to the muscles. And we are basically listening into that, and we’re translating those patterns of brain activity into a phoneme — like a syllable or the unit of speech — and then the words they’re trying to say.”
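In broad strokes, that pipeline (neural features in, a phoneme sequence out, then words assembled from phonemes) can be sketched in a few lines of code. The sketch below is purely illustrative: the function names, the stub classifier and the toy one-word lexicon are hypothetical stand-ins, not the UC Davis team's actual software.

```python
# Purely illustrative speech-BCI decoding sketch (hypothetical stand-in,
# not the UC Davis implementation). Neural features go in, a phoneme
# sequence comes out, and a pronunciation lexicon maps phonemes to a word.

from dataclasses import dataclass


@dataclass
class DecodedWord:
    text: str
    confidence: float


def decode_phonemes(neural_features: list[list[float]]) -> list[str]:
    """Stand-in for a trained classifier that maps each time bin of
    neural activity (e.g., per-electrode spike-band power) to a phoneme.
    A real system would use a trained neural network here."""
    # Hypothetical output: pretend the classifier recovered "hello".
    return ["HH", "EH", "L", "OW"]


def phonemes_to_word(phonemes: list[str],
                     lexicon: dict[tuple[str, ...], str]) -> DecodedWord:
    """Look up the phoneme sequence in a pronunciation lexicon. A real
    system would instead score many candidate words with a language model."""
    text = lexicon.get(tuple(phonemes), "<unknown>")
    return DecodedWord(text=text, confidence=1.0 if text != "<unknown>" else 0.0)


if __name__ == "__main__":
    lexicon = {("HH", "EH", "L", "OW"): "hello"}   # toy one-word vocabulary
    features = [[0.0] * 256 for _ in range(40)]    # 40 time bins x 256 channels
    word = phonemes_to_word(decode_phonemes(features), lexicon)
    print(word.text, word.confidence)              # -> hello 1.0
```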

One of the roadblocks to developing these sorts of brain-computer interfaces (BCIs) is that it takes time for the computer to learn how to properly interpret brain signals, researchers said.

“Previous speech BCI systems had frequent word errors. This made it difficult for the user to be understood consistently and was a barrier to communication,” said researcher Dr. David Brandman, an assistant professor of neurosurgery at UC Davis. “Our objective was to develop a system that empowered someone to be understood whenever they wanted to speak.”

During Harrell’s second session with the device, the size of his potential vocabulary increased from 50 to 125,000 words, researchers found.

With just an additional 1.4 hours of training data, the BCI achieved 90% word accuracy even with this greatly expanded vocabulary.
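Word accuracy figures like these are conventionally reported as 100% minus the word error rate, which counts substituted, inserted and deleted words relative to the reference sentence. Here is a minimal sketch of that standard calculation; the example sentences are made up, not study data.

```python
# Toy word-error-rate calculation using the standard definition
# (word-level edit distance divided by reference length), assuming
# accuracy is reported as 100% minus the word error rate.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance between word sequences, normalized by
    the length of the reference sentence."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)


wer = word_error_rate("i want to speak with my family",
                      "i want to speak with my fably")
print(f"word accuracy: {100 * (1 - wer):.1f}%")  # 1 error in 7 words -> 85.7%
```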

A report on Harrell’s case was published Aug. 14 in the New England Journal of Medicine.

The system is now better than 97% accurate, following 84 data sessions over 32 weeks in which Harrell engaged in more than 248 hours of communication, researchers said.

“At this point, we can decode what Casey is trying to say correctly about 97% of the time, which is better than many commercially available smartphone applications that try to interpret a person’s voice,” Brandman said.

“This technology is transformative because it provides hope for people who want to speak but can’t,” Brandman added. “I hope that technology like this speech BCI will help future patients speak with their family and friends.”

Harrell agreed.

“Something like this technology will help people back into life and society,” he said.

More information

The ALS Association has more on ALS.

SOURCE: University of California, Davis, news release, Aug. 14, 2024

Source: HealthDay
