Is telepathy possible? Maybe because of new technology


This story is part of a series about current advances in regenerative medicine. This piece reviews advances in brain-machine interfaces.

In 1999, I defined regenerative medicine as the set of interventions that restore normal function to tissues and organs damaged by disease, injured by trauma, or worn down by time. I include a full spectrum of chemical, gene and protein-based medicines, cell-based therapies and biomechanical interventions that achieve that goal.

In the mid-20th century, A.E. van Vogt wrote about the possibilities of telepathy in his novel Slan, in which mutations in the human germline produced a human-derived species capable of telepathy. Thanks to new technology, such an ability may be on the brink of becoming possible.

Telepathy is the soundless translation of thoughts into transmissible signals that can be received and understood by a computer or by other people. Emerging neural decoding technologies will soon enable highly accurate translation of thoughts into written or spoken language. In a recent study published in Nature, Dr. Xupeng Chen and colleagues from New York University wirelessly translated thoughts into words. This technology could revolutionize the way we approach brain-machine communication in the coming years.

In previous months I have described several brain-machine interface technologies that integrate speech translation. However, most are limited in efficacy or cumbersome in their form factor. While these innovations are remarkable, they are still far from practical for everyday use.

Enter Chen and colleagues’ experiments, in which they implanted electrodes directly into the brain to record electrical activity, a method known as electrocorticography. It offers some of the highest-resolution recordings of brain activity we can muster, especially compared to non-invasive methods.

Chen and colleagues used a cohort of epilepsy patients already fitted with the required implants. They noted that their speech decoder faced two major challenges. First, deep-learning AI models for speech decoding require a large data set, which did not exist at the time of their research. Second, individual speech production varies considerably in rate, tone, pitch, and other factors. Reconciling these issues in a general-purpose speech decoder would be challenging.

Their system included two main components: a neural decoder and a speech synthesizer. To train the synthesizer, the researchers fed spoken language to a speech encoder, which analyzed it for the factors discussed above, such as tone and pitch. They then fed that information into the synthesizer, creating a database of speech parameters.
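The encoder-then-synthesizer idea can be illustrated with a minimal sketch in Python. This is a toy stand-in, not the authors' method: it assumes "speech parameters" are just per-frame pitch and loudness, whereas the actual study extracts far richer parameters with neural networks.

```python
import numpy as np

SR = 16000   # sample rate (Hz); an assumption for this toy example
FRAME = 512  # samples per analysis frame

def encode_speech(audio):
    """Toy 'speech encoder': for each frame, estimate loudness (RMS)
    and pitch (strongest autocorrelation lag)."""
    params = []
    for i in range(0, len(audio) - FRAME, FRAME):
        frame = audio[i:i + FRAME]
        rms = np.sqrt(np.mean(frame ** 2))
        ac = np.correlate(frame, frame, mode="full")[FRAME - 1:]
        lag = np.argmax(ac[32:]) + 32        # skip very small lags
        params.append((SR / lag, rms))       # (pitch in Hz, loudness)
    return params

def synthesize(params):
    """Toy 'synthesizer': rebuild audio as a sine wave that follows
    the stored pitch/loudness parameters, frame by frame."""
    out, phase = [], 0.0
    for pitch, rms in params:
        t = np.arange(FRAME) / SR
        out.append(rms * np.sqrt(2) * np.sin(2 * np.pi * pitch * t + phase))
        phase += 2 * np.pi * pitch * FRAME / SR  # keep phase continuous
    return np.concatenate(out)
```

Encoding a pure 200 Hz tone and resynthesizing it recovers a tone of the same pitch and loudness; real speech, of course, needs many more parameters than this.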

The decoder was trained in a similar way. It was fed small snippets of neural data to build a database from which the AI could decode longer stretches of neural signals in the future.
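Carving long recordings into short training snippets is a generic preprocessing step, and a small sketch makes it concrete. This is an illustrative windowing routine, not the authors' actual code, and the sizes are invented for the example.

```python
import numpy as np

def make_snippets(recording, window=100, step=50):
    """Slice one long neural recording (samples x channels) into
    overlapping fixed-length snippets suitable for model training."""
    starts = range(0, len(recording) - window + 1, step)
    return np.stack([recording[s:s + window] for s in starts])

# e.g. 10 seconds of 64-channel data at 1 kHz yields many examples
recording = np.random.randn(10_000, 64)
snippets = make_snippets(recording)
print(snippets.shape)  # (199, 100, 64)
```

Overlapping windows multiply the number of training examples extracted from a limited recording, which matters given the small data sets the authors identified as a key challenge.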

They also equipped their decoding model with three deep-learning architectures: ResNet, Swin Transformer, and LSTM. Each architecture comes in two forms: causal and non-causal. Causal models use only past and present neural signals to generate speech, while non-causal models also use future neural signals. Those future cues come in the form of auditory and speech feedback that is not available in a real-time application; so while a non-causal model may be more accurate than a causal one, it is less relevant to real-time speech decoding.
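The causal/non-causal distinction is easiest to see with a simple smoothing filter, used here as a toy stand-in for the deep models (nothing is assumed about their internals): a causal filter averages only current and past samples, while a non-causal one also peeks at future samples.

```python
import numpy as np

def causal_smooth(x, k=3):
    """Output at time t depends only on x[t-k+1 .. t] (past + present)."""
    return np.array([x[max(0, t - k + 1):t + 1].mean() for t in range(len(x))])

def noncausal_smooth(x, k=3):
    """Output at time t also uses x[t+1 .. t+k-1] (future samples),
    which do not exist yet in a real-time decoder."""
    return np.array([x[max(0, t - k + 1):t + k].mean() for t in range(len(x))])

x = np.array([0., 0., 1., 0., 0.])  # an impulse arriving at t=2
print(causal_smooth(x))     # zero before t=2: reacts only once data arrives
print(noncausal_smooth(x))  # nonzero before t=2: it "saw" the future sample
```

This is why the causal variants are the ones that matter for a practical, real-time prosthesis, even if non-causal variants score better offline.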

After training the decoder system and early testing on the epilepsy cohort, the researchers found that the causal versions of ResNet and Swin Transformer were the most accurate architectures for the decoder and focused on these for further analyses.

Using their speech decoder on a series of subjects, the researchers made three observations: two came as a surprise, and a third confirmed previous assumptions. One surprise was that the differences between left- and right-hemisphere decoding were minimal. Although earlier work suggested that the speech-dominant left hemisphere would decode more effectively, the data did not support this assumption. This can be useful if the person being decoded has brain damage or impairment isolated to one side of the brain.

A second surprise was that the researchers found that electrode density was less important for decoding accuracy than once thought. Although it was suggested that higher density implants would provide more accurate neural signals and therefore more accurate speech translations, the differences between high and low density implants were minimal. This is exciting because low-density electrode implants could be much cheaper and make the decoding system much more accessible if it reached a commercial market.

Third, the researchers examined which cortical regions were most relevant for speech decoding. As suspected, the sensorimotor cortex was the most involved brain region, especially its ventral part. However, the researchers found that left- and right-hemisphere activation was similar in these regions, highlighting the potential of right-hemisphere neural prostheses, which were previously considered less optimal.

While a highly accurate speech decoder is a remarkable creation, the future implications of such research are much more exciting.

Most relevant to this study in particular, the researchers have made their neural decoding pipeline publicly available. This means their blueprint can serve as a foundation for future developments and streamline progress in the field. We encourage others in the field to share their tools in a similar way.

Perhaps more speculative, but certainly a possibility, this technology could pave the way for the eventual wireless, implant-free translation of thoughts into speech or action. Science fiction has often imagined telepathy, as in Slan by A.E. van Vogt. This kind of technology, though it once seemed like science fiction, could be another step on the path to that future. I expect further progress in the field of brain-machine interfaces soon.

If you would like to read more of this series, please visit www.williamhaseltine.com