Researchers reconstructed Pink Floyd’s song ‘Another Brick in the Wall, Part 1’ from the brain activity of patients undergoing epilepsy surgery.
The phrase ‘All in all, it’s just another brick in the wall’ comes through in the audio file, marking the first time researchers have reconstructed a recognisable song from brain recordings.
To achieve this, researchers at the University of California, Berkeley, recorded electrical activity from 2,668 electrodes placed on the brains of 29 patients while they listened to the 1979 rock song, then used nonlinear modelling to reconstruct the music from those signals.
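Conceptually, this kind of stimulus reconstruction maps recorded neural activity to a representation of the song's sound. The snippet below is a minimal, hypothetical sketch of that idea in Python, assuming windowed per-electrode neural features as inputs and audio spectrogram bins as targets, with scikit-learn's MLPRegressor standing in for the study's nonlinear models; all shapes, names and settings here are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch of nonlinear stimulus reconstruction:
# map windowed neural features (e.g. per-electrode high-frequency power)
# to audio spectrogram bins with a small neural-network regressor.
# Shapes and settings are assumptions for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples, n_electrodes, n_lags, n_freq_bins = 2000, 64, 10, 32

# X: neural features per time point, flattened over electrodes x time lags.
# y: the corresponding spectrogram column (frequency bins) of the audio.
X = rng.standard_normal((n_samples, n_electrodes * n_lags))
y = rng.standard_normal((n_samples, n_freq_bins))  # placeholder targets

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False
)

# A small multilayer perceptron serves as the nonlinear decoding model.
model = MLPRegressor(hidden_layer_sizes=(128,), max_iter=200, random_state=0)
model.fit(X_train, y_train)

# Predicted spectrogram frames for held-out time points.
reconstructed = model.predict(X_test)
print(reconstructed.shape)  # (n_test_samples, n_freq_bins)
```

In a real setting, the predicted spectrogram frames would then be inverted back into a waveform to produce an audible reconstruction such as the clip described above.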
The findings could potentially be used to improve devices that help people with speech difficulties.
“It’s a wonderful result,” said Robert Knight, a neurologist and UC Berkeley professor of psychology. “As this whole field of brain-machine interfaces progresses, this gives you a way to add musicality to future brain implants for people who need it, someone who’s got ALS [amyotrophic lateral sclerosis] or some other disabling neurological or developmental disorder compromising speech output.
“It gives you an ability to decode not only the linguistic content, but some of the prosodic content of speech, some of the affect. I think that’s what we’ve really begun to crack the code on.”
The reconstruction demonstrates that it is possible to capture the musical elements of speech – rhythm, stress, accent and intonation – from brainwaves.
https://www.youtube.com/watch?v=WKEkJAKlRM0
The technique also identifies a new brain region necessary for perceiving musical rhythm, which could be used by future brain-machine interfaces to recreate people’s voices.
In contrast, the technology available today to help people with aphasia caused by stroke or brain damage can decode words, but the sentences it produces have a robotic quality, similar to how the late Stephen Hawking sounded when he used a speech-generating device.
“Right now, the technology is more like a keyboard for the mind,” said Ludovic Bellier, the study’s lead researcher. “You can’t read your thoughts from a keyboard. You need to push the buttons. And it makes kind of a robotic voice; for sure there’s less of what I call expressive freedom.”
Bellier stressed that his team’s findings went beyond a black box that could synthesise speech. He and his colleagues were also able to pinpoint new areas of the brain involved in detecting rhythm and discovered that some portions of the auditory cortex – in the superior temporal gyrus, located just behind and above the ear – respond at the onset of a voice or a synthesiser, while other areas respond to sustained vocals.
“Language is more left brain,” Knight said. “Music is more distributed, with a bias toward right.”
In 2012, Knight, postdoctoral fellow Brian Pasley and their colleagues were the first to reconstruct the words a person was hearing from recordings of brain activity alone. Those findings led the researchers to pursue a predictive model for music, one able to capture elements such as pitch, melody, harmony and rhythm.
“Let’s hope, for patients, that in the future we could, from just electrodes placed outside on the skull, read activity from deeper regions of the brain with a good signal quality,” Bellier said. “But we are far from there.”
The team’s findings have been published in the open-access journal PLOS Biology.