They create a machine that transforms thoughts into speech

Neurological conditions or injuries that lead to the inability to communicate can be devastating. Patients with such a loss of speech often rely on alternative communication devices that use brain-computer interfaces or non-verbal head or eye movements to control a cursor to spell words, as in the case of physicist Stephen Hawking. While these systems can improve quality of life, they can only produce 5-10 words per minute, much slower than the natural rate of human speech.


Now, a team of researchers from the University of California, San Francisco has published in the journal Nature the details of a neural decoder that can transform brain activity into intelligible synthesized speech at a rate consistent with natural speech.

“It has been a goal of our laboratory for many years to create technology to restore communication for patients with severe speech disabilities,” explains neurosurgeon Edward Chang, who led the work. “We want to create technologies that can generate synthesized speech directly from the activity of the human brain. This study provides a proof of principle that this is possible.”

The scientists developed a method to synthesize speech using brain signals related to the movements of a patient’s jaw, larynx, lips and tongue. To achieve this, they recorded high-density electrocorticography signals from five participants undergoing intracranial monitoring for the treatment of epilepsy. They tracked the activity of the brain areas that control speech and articulatory movement as the volunteers spoke several hundred sentences.
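
The article does not detail how the raw recordings were turned into decoder inputs. Below is a minimal Python sketch of one standard way to extract speech-related features from ECoG, the analytic amplitude of the high-gamma band; the band edges, sampling rate, and channel count are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: high-gamma analytic amplitude from ECoG recordings.
# The 70-150 Hz band, 1 kHz sampling rate, and 256 channels are assumptions
# for illustration; the study's exact preprocessing is not described here.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_features(ecog, fs, low=70.0, high=150.0):
    """ecog: (n_samples, n_electrodes) array; fs: sampling rate in Hz."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    band = filtfilt(b, a, ecog, axis=0)        # zero-phase band-pass filter
    envelope = np.abs(hilbert(band, axis=0))   # instantaneous amplitude
    # z-score each electrode so channels are comparable across recordings
    return (envelope - envelope.mean(axis=0)) / envelope.std(axis=0)

# Example: 10 seconds of simulated 256-channel ECoG sampled at 1 kHz
features = high_gamma_features(np.random.randn(10_000, 256), fs=1_000.0)
```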

To reconstruct speech, rather than transforming brain signals directly into audio signals, the researchers used a two-stage approach. First, they designed a recurrent neural network that decoded neural signals into vocal tract movements. These movements were then used to synthesize speech. “We showed that using brain activity to control a computer-simulated version of the participant’s vocal tract allowed us to generate synthetic speech with a more natural and precise sound than trying to directly extract speech sounds from the brain,” Chang clarifies.
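
To make the two-stage idea concrete, here is a minimal PyTorch sketch: one recurrent network maps neural features to articulatory (vocal tract) trajectories, and a second maps those trajectories to acoustic features that a vocoder could turn into audio. The bidirectional LSTMs, layer sizes, and feature dimensions are illustrative assumptions, not the authors’ exact architecture.

```python
# Minimal sketch of a two-stage neural-to-speech decoder (assumed shapes).
import torch
import torch.nn as nn

class ArticulatoryDecoder(nn.Module):
    """Stage 1: neural activity -> vocal tract movement trajectories."""
    def __init__(self, n_electrodes=256, n_articulators=33, hidden=100):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_articulators)

    def forward(self, neural):                  # (batch, time, electrodes)
        h, _ = self.rnn(neural)
        return self.out(h)                      # (batch, time, articulators)

class AcousticDecoder(nn.Module):
    """Stage 2: vocal tract movements -> acoustic features for a vocoder."""
    def __init__(self, n_articulators=33, n_acoustic=32, hidden=100):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, kinematics):
        h, _ = self.rnn(kinematics)
        return self.out(h)

# Chaining the stages: brain activity in, synthesizer parameters out.
stage1, stage2 = ArticulatoryDecoder(), AcousticDecoder()
neural = torch.randn(1, 500, 256)               # one trial, 500 time steps
acoustic = stage2(stage1(neural))               # feed to a vocoder for audio
```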


Clear and intelligible speech

To assess the intelligibility of the synthesized speech, the researchers ran listening tasks based on single-word identification and sentence-level transcription. In the first task, which evaluated 325 words, they found that listeners identified words more accurately as syllable length increased and the number of word choices (10, 25, or 50) decreased, consistent with natural speech perception.

For the sentence-level tests, listeners heard synthesized sentences and transcribed what they heard by selecting words from a defined pool (of 25 or 50 words) that included both target and random words. Of the 101 sentences tested, at least one listener provided a perfect transcription for 82 sentences with the 25-word pool and 60 sentences with the 50-word pool. Transcribed sentences had a mean word error rate of 31% with the 25-word pool and 53% with the 50-word pool.
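
For context, transcription error rates like those quoted above are typically computed as word error rate: the word-level edit distance between the reference sentence and the transcription, divided by the length of the reference. A minimal sketch, assuming a standard Levenshtein-based scoring rather than the study’s exact protocol:

```python
# Minimal word error rate (WER) sketch: edit distance over reference length.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn first i reference words into first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))  # ~0.17
```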

“This level of intelligibility for neurologically synthesized speech would already be immediately meaningful and practical for real-world application,” the authors write.

Reestablishing communication

While the above tests were conducted on participants who could speak normally, the team’s primary goal is to create a device for people with communication disabilities. To simulate a setting in which the patient cannot vocalize, the researchers tested their decoder on speech that was silently mimed.

For this, the participants were asked to speak sentences aloud and then mime them, making the same articulatory movements with their mouths but without sound. “Then we ran our speech decoder to decode these neural recordings, and we were able to generate speech,” explains study co-author Josh Chartier. “It was really amazing that we could still generate audio signals from an act that didn’t create sound at all.”

So how can a person who cannot speak be trained to use the device?

“If someone can’t speak, then we don’t have a speech synthesizer for that person,” says Gopala Anumanchipalli, first author of the study. “We have used a speech synthesizer trained on one subject and driven by the neural activity of another subject. We have shown that this can be possible.”

The team now has two goals. “First, we want to improve the technology, to make it more natural and more intelligible.” The other challenge is determining whether the same algorithms that work for people with normal speech will also work in a population that cannot speak, a question that may require a clinical trial to answer. Even so, the finding is a first step toward helping people who have lost their speech to degenerative diseases recover it.

Reference: Gopala K. Anumanchipalli, Josh Chartier & Edward F. Chang. Speech synthesis from neural decoding of spoken sentences. Nature 568, 493–498 (2019). DOI: https://doi.org/10.1038/s41586-019-1119-1
