AI Brain Activity Decoder Can Translate Thoughts Into Written Words

At least some of the time, a story you merely imagine can materialize as written words.

Though it may sound like something out of science fiction, a new artificial intelligence (AI) model created at the University of Texas at Austin has succeeded in doing just that. The model can be trained to decode sophisticated, continuous language from a person's thoughts over extended periods, using only noninvasive scanning techniques.

"For a noninvasive method, this is a real leap forward compared to what's been done before, which is typically single words or short sentences," research co-leader Alex Huth, an assistant professor of neuroscience and computer science, said in a release.

Other systems with a similar function are being developed, but this one stands out because participants don't need surgery to have implants fitted, and they aren't restricted to a fixed vocabulary list.

The model, known as a semantic decoder, uses technology similar to that behind OpenAI's ChatGPT and Google's Bard chatbots. It is trained on hours of data from an individual who listens to podcasts while their brain is scanned using functional magnetic resonance imaging (fMRI). With the participant's permission, the trained model can then decode their brain activity into a stream of text as they listen to a new story or imagine telling one.

The results? While the decoder cannot reproduce a person's thoughts word for word, it frequently captures the gist of what they are thinking. After extensive training, it can produce text that accurately reflects the person's thoughts about half the time.

The study extended beyond hearing or imagining stories: the model was also able to interpret a subject's brain activity while they silently watched a movie clip.

Even if it's not perfect, the fact that the method is entirely noninvasive is a major advantage. Technologies like this could one day help people who are no longer able to communicate through speech, such as some stroke survivors.

However, you're not alone if this kind of technology makes you uneasy. For many people, a machine that can read your thoughts is the stuff of dystopian fears rather than hopeful science fiction.

Jerry Tang, a PhD student and the study's other co-leader, addressed these understandable worries: "We take very seriously the concerns that it may be utilized for negative ends and have endeavored to avoid that. We want to ensure that individuals only use these technologies when they want to and when they are beneficial to them."

The first practical limitation is that the system requires hours of training before it can function at all. Before it actually works on a person, Huth said, "they need to spend up to 15 hours lying in an MRI scanner, being perfectly still, and paying good attention to stories that they're listening to."

In addition, there is a failsafe: by simply thinking about something unrelated, such as animals, even someone who had helped train the model could stop it from deciphering their inner speech.

Privacy and safety, however, remain a top priority as the researchers try to advance this technology. "I think it's important to be proactive by enacting policies that protect people and their privacy right now, while the technology is in such an early stage," said Tang. "Regulating the purposes for which these devices may be used is also crucial."

The study is published in Nature Neuroscience.