SAN FRANCISCO, Calif. — For the first time, scientists believe they've found a way to generate full spoken sentences based on brain activity, paving the way for technology that can potentially be used by people with speech disabilities.
That's according to new research from the University of California, San Francisco Weill Institute for Neuroscience, for which scientists implanted electrodes in five epilepsy patients, recorded them as they read 101 sentences aloud and documented how areas of the brain involved in language responded.
They then mapped how individuals’ vocal tracts moved as they spoke, creating a simulated vocal tract for each participant.
Because "there are about 100 muscles used to produce speech, and they are controlled by a combination of neurons firing at once," according to New Scientist, "it's not as simple as mapping signals from one electrode to one muscle to sort out what the brain is telling the mouth to do." That's why scientists designed machine learning algorithms to decode brain activity into vocal tract movements and ultimately produce speech resembling each participant's voice.
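The study describes a two-stage pipeline: brain activity is first decoded into vocal tract movements, and those movements are then converted into audible speech. The sketch below illustrates only that data flow. It is not the researchers' code — the actual system used recurrent neural networks, while this stand-in uses simple least-squares maps, and every dimension (electrode count, feature counts) is a made-up placeholder.

```python
# Illustrative two-stage decoder sketch (NOT the UCSF team's method):
# stage 1 maps neural recordings to vocal-tract kinematics, stage 2
# maps those kinematics to acoustic features. Plain least-squares
# stands in for the recurrent networks used in the study; all
# dimensions below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

T = 200              # time steps of recorded activity
N_ELECTRODES = 64    # recording channels (hypothetical count)
N_KINEMATIC = 33     # articulator-movement features (hypothetical)
N_ACOUSTIC = 32      # spectral features of the synthesized audio

# Synthetic training data standing in for real recordings.
neural = rng.standard_normal((T, N_ELECTRODES))
kinematics = rng.standard_normal((T, N_KINEMATIC))
acoustics = rng.standard_normal((T, N_ACOUSTIC))

# Stage 1: learn a map from neural activity to articulator movements.
W1, *_ = np.linalg.lstsq(neural, kinematics, rcond=None)

# Stage 2: learn a map from decoded movements to acoustics.
decoded_kin = neural @ W1
W2, *_ = np.linalg.lstsq(decoded_kin, acoustics, rcond=None)

# At synthesis time, new brain activity flows through both stages.
new_neural = rng.standard_normal((10, N_ELECTRODES))
synthesized = new_neural @ W1 @ W2
print(synthesized.shape)  # (10, 32)
```

The intermediate kinematic stage is the key design choice the article hints at: rather than jumping straight from electrodes to sound, the system models the simulated vocal tract as a bridge between the two.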
The next step was to test speech comprehension. To do this, researchers played the new machine-produced voices to 1,755 native English speakers and asked them to transcribe what they heard.
According to the study, published Wednesday in the journal Nature Neuroscience, the listeners transcribed 43% of the trials perfectly and were able to understand 69% of words spoken on average.
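A per-word accuracy figure like the reported 69% can be pictured as comparing each listener's transcript to the sentence that was actually spoken. The toy scorer below is only an illustration of that idea; the study's actual evaluation protocol (closed word pools, multiple trials per sentence) is more involved, and the function name and scoring rule here are made up for the example.

```python
# Hypothetical word-level scorer, for illustration only -- not the
# study's evaluation protocol. It counts how many words of the
# reference sentence appear in a listener's transcript.
def word_accuracy(reference: str, transcript: str) -> float:
    ref_words = reference.lower().split()
    heard = set(transcript.lower().split())
    hits = sum(1 for w in ref_words if w in heard)
    return hits / len(ref_words)

# A listener mishears two small words of a six-word sentence.
score = word_accuracy("the cat sat on the mat", "a cat sat on a mat")
print(round(score, 2))  # 0.67
```

Averaging such scores across many listeners and trials yields an aggregate intelligibility number of the kind the study reports.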
"We still have a ways to go to perfectly mimic spoken language," UCSF researcher Josh Chartier told Newsweek. "We're quite good at synthesizing slower speech sounds like 'sh' and 'z' as well as maintaining the rhythms and intonations of speech and the speaker's gender and identity, but some of the more abrupt sounds like 'b's and 'p's get a bit fuzzy."
Though the two-step process, which uses electrodes to detect brain activity and computer algorithms to reproduce speech, isn't ready for clinical settings, the accuracy of the artificial decoder is a significant improvement over what's currently available. It may prove useful for people who were once able to speak but lost the ability, commonly because of conditions like Lou Gehrig's disease, autism, some cancers, dementia and other neurological disorders. This is because the device depends on motor control signals, which the brain still produces even if an individual is paralyzed.
"People who can't move their arms and legs have learned to control robotic limbs with their brains," Chartier said. "We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract."