SAN FRANCISCO: Scientists have developed a tool that decodes brain signals for the speech-related movements of the jaw, larynx, lips and tongue and synthesizes the signals into computerized speech, according to a small study published in Nature.
The study was conducted in research participants with intact speech, but the technology could one day restore the voices of people who have lost the ability to speak due to paralysis and other forms of neurological damage.
Stroke, traumatic brain injury, and neurodegenerative diseases such as Parkinson's disease, multiple sclerosis and amyotrophic lateral sclerosis (ALS, or Lou Gehrig's disease) often result in an irreversible loss of the ability to speak.
Some people with severe speech disabilities learn to spell out their thoughts letter-by-letter using assistive devices that track very small eye or facial muscle movements.
However, producing text or synthesized speech with such devices is laborious, error-prone, and painfully slow, typically permitting a maximum of 10 words per minute, compared to the 100 to 150 words per minute of natural speech.
The new system, being developed in the laboratory of Edward Chang, M.D., demonstrates that it is possible to create a synthesized version of a person's voice that can be controlled by the activity of their brain's speech centers.
In the future, this approach could not only restore fluent communication to individuals with a severe speech disability, the authors say, but could also reproduce some of the musicality of the human voice that conveys the speaker's emotions and personality.
“For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity,” said Chang.