Health

AI brings researchers one step closer to restoring speech in people with paralysis

New research shows how a computer avatar can speak the words that someone with a brain injury was thinking. While years away from commercial application, the researchers and others consider it a significant development in forming words quickly — and out loud — by interpreting brain signals.

New technology is 'big advance' in interpreting brain signals to let someone speak, say researchers

Ann, a research participant in the UCSF study of speech neuroprostheses, uses a digital link wired to her cortex to interface with an avatar on May 22 in El Cerrito, Calif. At left is UCSF clinical research coordinator Max Dougherty. (Noah Berger)

With a Toronto Blue Jays cap on his head, William Johnson turns to his wife, Ann, and asks how she's feeling about the baseball team.

"Anything is possible," responds Ann, 48, who lives in Regina.

Her husband quips back that it seems like she doesn't have a whole lot of confidence in them.

At this, Ann giggles, pauses and says, "You are right about that." 

It's the first conversation the couple has had in Ann's own voice in 18 years, recorded as part of a clinical trial in California that Ann is taking part in.

When she was 30 years old, Ann had a brainstem stroke that left her unable to speak. She was diagnosed with locked-in syndrome, meaning she can't talk and has limited movement.

Since then, simple conversations have taken several minutes, as Ann relies on devices that require her to spell out each word with eye movements.

But new scientific advancements show how artificial intelligence (AI) is making it easier for people with brain injuries to have more fluent conversations — like the one Ann had with her husband about the Blue Jays. 

Ann, left, and her husband William Johnson, right, look at her virtual avatar on the screen that is helping them communicate. (UCSF)

The research, published in the journal Nature Wednesday, shows how phrases that Ann is thinking can be spoken, in her own voice, by an online avatar. While years away from commercial application, the researchers and others consider it a significant development in forming words quickly — and out loud — by interpreting brain signals.

"This is a really, really big advance," said Margaret Seaton, a clinical research coordinator at the University of California San Francisco (UCSF), who worked on the study. 

"[Ann] described it as extremely emotional to hear her own voice after over 18 years of not having her voice." 

WATCH | How artificial intelligence gave a stroke victim her voice back:


Faster than previous speech tools

During an online press conference Tuesday, the study's principal investigator, Edward Chang, said that "speech loss after injury is devastating." 

"Speech isn't just about communicating words, but also who we are, our voice and expressions are part of our identity," said Chang, who is also a neurological surgery professor at UCSF's Weill Institute for Neurosciences.  

For many Canadians, paralysis that leaves them unable to speak can come about from a brain injury caused by an accident or stroke, or from a diagnosis like amyotrophic lateral sclerosis (ALS).

The ability to convert brain signals into words isn't new, but the speed at which the technology now operates, and having the words spoken by a virtual avatar, are what make this latest study a significant one in the field, experts say.

Dr. Lorne Zinman, a neurologist at Sunnybrook Hospital and head of Canada's largest ALS clinic, says the devices in this research are an "incredible innovation."

"The majority of patients with ALS are going to develop speech difficulties and many will lose their ability to speak," said Zinman. 

"The development of new technologies to allow them to communicate can have a major impact on improving their quality of life."

Margaret Seaton, right, sits beside Ann, left, as she uses a traditional device that measures her eye movements in order to type out each individual word. (UCSF)

About two years ago, Chang and his team at UCSF showed how electrodes implanted in a person's brain can translate neural activity into written words on a screen.

At the time, the technology was only able to decode about 15 words a minute, but the group's latest research shows how advancements have made it possible to decode 78 words per minute.

On average, a typical person speaks 150 to 200 words per minute, so while the system is still not on par with regular speech, researchers say they are getting closer to restoring a natural flow.

"We believe that these results are important, because it opens the door for new applications where people with paralysis will have personalized interactions with their family, friends," said Chang. 

Dr. Lorne Zinman is a neurologist at Sunnybrook Hospital in Toronto, where he also heads the biggest ALS clinic in Canada. (CBC)

Device got 75% of words right 

With this particular study, Chang and his team implanted a sheet of 253 electrodes onto the surface of Ann's brain over areas that are known to be crucial for speech production. 

In order for a person to speak, the brain sends signals to different parts of the face, like the tongue, jaw and lips. But Ann's stroke left her muscles unable to respond to these signals.

To pick up on these signals her brain was trying to transmit, the researchers placed a port in Ann's head and used a cable to connect the electrodes in her brain to a series of computers.

For about two weeks, Ann worked with the system — repeatedly trying to silently say different phrases by moving her mouth as much as she could.

The phrases involved more than 1,000 words, which Seaton says covers 85 per cent of the average person's daily vocabulary. 

The data collected was then fed to artificial intelligence algorithms, training the system to recognize the signals Ann's brain sends out to produce different speech sounds.

The researchers then ran a testing phase, in which Ann would think the phrases and the algorithm would verbalize them based on the activity it picked up from her brain.
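To make the training and testing phases described above concrete, here is a minimal Python sketch of the same idea: windows of neural activity are mapped to speech-sound labels, a classifier is trained on repeated attempts, and held-out attempts are then decoded. This is purely illustrative; the data is simulated, the model is a simple off-the-shelf classifier rather than the deep networks used in the study, and only the 253-electrode count comes from the research itself.

```python
# Toy illustration of a speech-decoding pipeline. Everything here is
# simulated; the real UCSF system trains deep-learning models on actual
# recordings from a 253-electrode array.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_CHANNELS = 253          # electrodes in the implanted array (from the study)
N_SOUNDS = 39             # hypothetical inventory of speech sounds (phonemes)
TRIALS_PER_SOUND = 40     # repeated silent-speech attempts per sound

# Simulate training data: each speech sound gets its own (noisy) pattern
# of activity across the electrode channels.
patterns = rng.normal(size=(N_SOUNDS, N_CHANNELS))
X = np.vstack([p + 0.8 * rng.normal(size=(TRIALS_PER_SOUND, N_CHANNELS))
               for p in patterns])
y = np.repeat(np.arange(N_SOUNDS), TRIALS_PER_SOUND)

# Training phase: fit a classifier on the recorded attempts.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Testing phase: decode held-out brain activity into speech-sound labels,
# which a downstream system would assemble into words and synthesized audio.
print(f"held-out accuracy: {decoder.score(X_test, y_test):.0%}")
```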


The findings show the computer got three out of every four words correct, meaning the algorithm chose the wrong word about 25 per cent of the time.
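Decoding accuracy like this is usually reported as word error rate: the number of substituted, inserted or deleted words needed to turn the decoded sentence into the intended one, divided by the intended word count. The function below is a generic textbook implementation of that metric, not the paper's own evaluation code:

```python
# Generic word error rate (WER) via edit distance over words.
def word_error_rate(intended: str, decoded: str) -> float:
    ref, hyp = intended.split(), decoded.split()
    # d[i][j] = edits needed to match ref[:i] with hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word in four, the rate the article describes:
print(word_error_rate("how are you today", "how are you monday"))  # 0.25
```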

"It was really exciting to see how quickly she was being able to get the computer to understand what she was trying to say," said her husband William. 

The researchers were also able to personalize the voice coming from the avatar to be Ann's own, by creating a speech-synthesis algorithm and feeding it a recording of her voice from a speech she gave at her wedding.

The UCSF study was released in Nature Wednesday alongside another, by Francis Willett and his team at Stanford University.

Willett's study also looked at ways to collect brain activity and convert it into intended words, but his team did so by monitoring individual neurons in the brain with a series of very small electrodes. They found a person with ALS was able to communicate 62 words per minute, displayed as text on a device.

Right now, Zinman says the ALS patients he sees at Sunnybrook can communicate in a few different ways. 

At first, he says a patient might type or write, but the disease often eventually takes away their ability to move. 

In this case, he says people can use a device that relies on their eye movements to spell out words. 

"You can imagine how long it would take to spell out a sentence with your eyes," he said. 

With these new devices, Zinman says the person only has to think of a word for it to appear. 

"That's the real exciting part about these brain-computer interfaces," he said, adding that it will allow patients to actually converse with loved ones. 

Years away from commercial devices 

Despite how significant these findings are, the University of California researchers acknowledge that this technology is still years away from actually being used in people's daily lives. 

Seaton said better algorithms that can more accurately decode brain signals could arrive in the near future.

The port on Ann's head connects the electrodes in her brain to a series of computers. (UCSF)

But, Seaton says they would also like to see the device become wireless and portable — which will likely take much longer to become a reality. 

Seaton described the port in Ann's head as an "active wound site" that needs to be monitored. As a result, she said, the technology can only be used in a laboratory setting with the support of a researcher.

Upgrades to the device, along with regulatory approval, are likely more than five years away, Seaton estimates. 

Yalda Mohsenzadeh, an assistant professor of computer science and a member of the Brain and Mind Institute at Western University in London, Ont., says she hopes wearable devices placed on top of the scalp can eventually be used, so that surgery isn't required to implant the electrodes.

Yalda Mohsenzadeh is an assistant professor of computer science and a member of the Brain and Mind Institute at Western University in London, Ont. (CBC)

Additionally, she pointed out that these devices need to show they can be used safely and reliably over a long period of time, across different types of people. 

"For technology like this to be realistically used it first needs to be addressed that it can work under all these variabilities that we have for individuals and between individuals," she said. 

Seaton says they are working to recruit more people with different brain injuries, to validate their findings across a larger group. 

As for Ann, she hopes her participation has helped move this field forward and that more advancements are just around the corner. 

"Hopefully one day, this just becomes something that is somewhat attainable for people that can't speak," said William. 

ABOUT THE AUTHOR

Jennifer La Grassa

Videojournalist

Jennifer La Grassa is a videojournalist at CBC Windsor. She is particularly interested in reporting on healthcare stories. Have a news tip? Email jennifer.lagrassa@cbc.ca
