Mind-reading and non-verbal communication — no, seriously

Posted 8 months ago by David McManus
Mind-reading and telepathic communication weren’t on the Talking Disability team’s 2023 bingo card… [Source: Shutterstock]

The future of non-verbal communication and comprehension is within reach for researchers.

Key points:

  • Neuroscience researchers at the University of California, Berkeley, came one step closer to mind-reading after assessing the data from Albany Medical Centre, New York
  • The New York-based neuroscientists recorded brain activity in patients receiving epilepsy surgery as they listened to Pink Floyd’s ‘Another Brick in the Wall [part one]’
  • Following the initial recordings of 29 surgery patients in 2008 and 2015, artificial intelligence, commonly referred to as AI, helped West Coast researchers recreate what patients heard

 

Neuroscience researchers from the University of California, Berkeley, were able to reconstruct a three-minute excerpt of the Pink Floyd song ‘Another Brick in the Wall [part one]’ through the use of artificial intelligence.

The breakthrough research came a decade after researchers at Albany Medical Centre attached electrodes to the brains of 29 patients undergoing epilepsy surgery and recorded their brain activity. The recordings were taken through intracranial electroencephalography — a technique that can only be performed from the surface of the brain itself. Although it’s a far cry from the telepathic mind-reading powers of a super-villain, researchers were still able to recreate the lyric “all in all, it was just a brick in the wall” with its rhythm intact — a world first.
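
For readers curious about the mechanics, the published study decoded the song’s auditory spectrogram from the electrode recordings with regression-based models and judged the result by how closely it matched the original. The sketch below is a minimal, illustrative version of that general approach in Python: it uses random placeholder data rather than the actual intracranial recordings, and every variable name and shape is an assumption for illustration, not something taken from the study.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    n_timepoints = 2000   # time bins spanning the song excerpt (illustrative)
    n_electrodes = 300    # neural features, e.g. per-electrode high-gamma power (illustrative)
    n_freq_bins = 128     # spectrogram frequency bins to reconstruct (illustrative)

    # Placeholder data standing in for real recordings and the real song spectrogram.
    X = rng.standard_normal((n_timepoints, n_electrodes))
    Y = rng.standard_normal((n_timepoints, n_freq_bins))

    # Hold out the final 20 percent of time points for testing.
    split = int(0.8 * n_timepoints)
    X_train, X_test = X[:split], X[split:]
    Y_train, Y_test = Y[:split], Y[split:]

    # A regularised linear model maps neural activity to every spectrogram bin at once.
    decoder = Ridge(alpha=10.0)
    decoder.fit(X_train, Y_train)
    Y_pred = decoder.predict(X_test)

    # Score each frequency bin by the correlation between predicted and actual values,
    # one common way spectrogram reconstructions are evaluated.
    r_per_bin = [np.corrcoef(Y_pred[:, k], Y_test[:, k])[0, 1] for k in range(n_freq_bins)]
    print(f"mean reconstruction correlation: {np.mean(r_per_bin):.3f}")

In practice it is richer models and real neural data that make the reconstruction recognisable as music; this sketch only illustrates the decoding framing, not the study’s actual pipeline.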

“It’s a wonderful result,” said Robert Knight, a neurologist and UC Berkeley professor of psychology at the Helen Wills Neuroscience Institute, who conducted the study with postdoctoral fellow Ludovic Bellier.

“One of the things for me about music is it has prosody and emotional content. As this whole field of brain-machine interfaces progresses, this gives you a way to add musicality to future brain implants for people who need it — someone who’s got [Lou Gehrig’s disease] or some other disabling neurological or developmental disorder compromising speech output.

“It gives you an ability to decode not only the linguistic content, but some of the prosodic content of speech, some of the affect. I think that’s what we’ve really begun to crack the code on.”

To date, the brain-machine interfaces used to generate speech for people who are unable to communicate verbally have been rather monotone and robotic, akin to the speech-generating device used by physicist Stephen Hawking.

“Right now, the technology is more like a keyboard for the mind,” Bellier said.

“You can’t read your thoughts from a keyboard. You need to push the buttons […] it makes kind of a robotic voice; for sure there’s less of what I call ‘expressive freedom.’”

Along with mapping how different regions of the brain respond to different types of sound — such as sustained vocals versus thrumming guitar and synthesised sound — the research confirmed that the right side of the brain is more attuned to music than the left.

“Language is more left brain. Music is more distributed, with a bias toward right,” Professor Knight said.

The research served as a launching point for future investigation into how people with aphasia — difficulty with communication and comprehension, following a stroke or brain injury — may be able to sing rather than speak.

 

Check out the complete study, available at PLOS Biology, or listen to the AI-generated reconstruction of Pink Floyd’s classic track via CBC Canada.

 

Let the team at Talking Disability know your thoughts on the role of tone and sound in communication.

Listen to the AI-generated recreation of what patients heard

Credit to uploader: Euronews