Scientists at Stanford University, in the United States, have developed a new brain-computer interface (BCI) that can decode a person's attempted speech at up to 62 words per minute.
According to the researchers responsible for the project, this decoding rate is 3.4 times faster than the previous record, bringing a real-time speech conversion system within reach of the pace of natural human conversation.
“We found that we only needed to analyze brain activity in a relatively small region of the cortex to convert it into coherent speech, using only a machine learning algorithm as a basis,” explains Professor Francis Willett, lead author of the study.
The main objective of the scientists was to create a device that could restore the voice of people who lost the ability to speak naturally, whether due to diseases such as amyotrophic lateral sclerosis (ALS) and cerebral palsy or to some type of accident.
Using a recurrent neural network decoder with the ability to predict text in real time, the researchers turned brain signals into words at an astonishingly fast pace, giving the patient the ability to communicate again.
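The article does not detail the decoder's architecture. As a purely illustrative sketch (all dimensions, weights, and names here are assumptions, not the study's actual model), a recurrent network of this kind maps each short time bin of neural features to phoneme probabilities, carrying a hidden state forward so that context accumulates across bins:

```python
import numpy as np

# Illustrative toy recurrent decoder: binned neural features in,
# per-bin phoneme labels out. All sizes and weights are arbitrary
# assumptions for demonstration only.
rng = np.random.default_rng(0)

N_CHANNELS = 128   # hypothetical neural features per 20 ms bin
HIDDEN = 64        # hypothetical recurrent state size
N_PHONEMES = 40    # roughly the English phoneme inventory, plus silence

W_in = rng.normal(0, 0.1, (HIDDEN, N_CHANNELS))
W_rec = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_out = rng.normal(0, 0.1, (N_PHONEMES, HIDDEN))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def decode(features):
    """Run the RNN over a sequence of neural feature vectors and
    return the most likely phoneme index for each time bin."""
    h = np.zeros(HIDDEN)
    labels = []
    for x in features:
        h = np.tanh(W_in @ x + W_rec @ h)   # recurrent state update
        p = softmax(W_out @ h)              # phoneme probabilities
        labels.append(int(p.argmax()))
    return labels

# 50 time bins of synthetic "neural" data stand in for recordings
phones = decode(rng.normal(size=(50, N_CHANNELS)))
print(len(phones))  # → 50, one phoneme label per bin
```

In a real system the per-bin phoneme probabilities would then be passed through a language model to assemble whole words and sentences; this sketch stops at the per-bin labels.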
“We demonstrated a speech BCI that can decode unrestricted sentences from a large vocabulary at a rate of 62 words per minute, far exceeding communication rates using alternative technologies such as eye tracking,” adds Willett.
In the real world
During tests carried out in the laboratory, the researchers recorded neural activity in two small areas of the brain of a patient with amyotrophic lateral sclerosis who could move his mouth but had great difficulty forming words.
They found that the neural representation of these orofacial movements was strong enough to support a speech brain-computer interface, despite the patient's facial paralysis and the narrow surface coverage of the cortex.
“Our demonstration is a proof of concept that decoding attempted speech movements from intracortical recordings is a promising approach, but we still need to improve the system’s error rate — approximately 20% — by probing other areas of the brain and by optimizing our algorithm,” concludes Willett.
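The roughly 20% figure quoted is a word error rate: the word-level edit distance (insertions, deletions, and substitutions) between the decoded sentence and what the patient intended, divided by the length of the intended sentence. A minimal sketch of how such a rate is computed (the example sentences are invented):

```python
def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / len(ref)

# one substitution out of five words: 20% word error rate
print(word_error_rate("the quick brown fox jumps",
                      "the quick brown fox jumped"))  # → 0.2
```

At a 20% rate, roughly one word in five would need correction, which is why the team sees reducing it as the key next step.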