AI technology could soon read your mind, literally

Researchers at giant tech companies are working on new ways to better understand people’s minds, but to what extent?

Have you ever wondered what would happen if someone could read your mind? Well, several researchers from different institutions are working with AI to do just that, each with their own method.

As technology continues to advance day by day, scientists may soon be able to read minds with such accuracy that they can potentially reveal what a person is thinking or seeing, and even why they may be prone to mental illnesses such as schizophrenia and depression.

In fact, similar technology is already being used to diagnose and treat various disorders by better understanding how brains work.

For example, it is currently being used to allow surgeons to plan how to operate on brain tumors while preserving as much good tissue as possible.

Additionally, it has allowed psychologists and neuroscientists to map connections between different areas of the brain and cognitive processes such as memory, language and vision.

But now experts are opening new doors for AI to continue to advance.

Reading brain waves

The latest attempt was by Meta, Facebook’s parent company, and revolved mainly around hearing.


On August 31, experts at the company revealed that their newly developed artificial intelligence (AI) can “hear” what people hear simply by analyzing their brain wave activity.

The research, which is currently in its very early stages, aims to serve as a building block for technology that could help those with traumatic brain injuries who cannot communicate verbally or via a keyboard. If successful, scientists could capture brain activity without surgically inserting electrodes into the brain to record it.

“There are a bunch of conditions, from traumatic brain injury to anoxia [an oxygen deficiency], which effectively render people unable to communicate. And one of the pathways that has been identified for these patients over the past few decades is brain-computer interfaces,” Jean-Rémi King, a researcher at Facebook’s Artificial Intelligence Research Lab (FAIR), told TIME.

One way scientists have enabled such communication is by placing electrodes on the motor areas of the patient’s brain, King explained. However, that technique is highly invasive, which is why his team is working to adopt other, “safer” methods.

“Therefore, we set out to test using non-invasive recordings of brain activity. The goal was to create an AI system that could decode the brain’s responses to the stories being told.”

As part of the experiment, researchers had 169 healthy adults listen to stories and words read aloud while wearing various equipment (such as electrodes taped to their heads) to monitor their brain activity.

In an attempt to uncover patterns, the researchers then fed the data into an AI model. Based on the electrical and magnetic activity in the participants’ brains, they wanted the algorithm to “hear” or detect what the participants were listening to.
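Public descriptions of this work frame the task as a retrieval problem: the model turns a short window of brain recordings into a numerical embedding, then picks, from a bank of candidate audio segments, the one the participant most likely heard. The sketch below illustrates that idea with a toy encoder; the architecture, channel counts and embedding sizes are illustrative assumptions, not Meta’s actual model.

```python
import torch
import torch.nn as nn

class BrainEncoder(nn.Module):
    """Toy encoder: maps a window of M/EEG channels x time to one embedding."""
    def __init__(self, n_channels=208, emb_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 128, kernel_size=5, padding=2),
            nn.GELU(),
            nn.Conv1d(128, emb_dim, kernel_size=5, padding=2),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),        # average over the time axis
        )

    def forward(self, x):                   # x: (batch, channels, time)
        return self.net(x).squeeze(-1)      # -> (batch, emb_dim)

# Hypothetical data: short brain-recording windows and precomputed speech embeddings
brain_windows = torch.randn(8, 208, 360)    # batch of recorded windows
speech_bank = torch.randn(1000, 256)        # candidate audio segments, embedded

encoder = BrainEncoder()
z = nn.functional.normalize(encoder(brain_windows), dim=-1)
bank = nn.functional.normalize(speech_bank, dim=-1)

# "Hearing" as retrieval: which candidate segment best matches each window?
scores = z @ bank.T                         # cosine similarities
predicted_segment = scores.argmax(dim=-1)   # index of the best-matching audio
```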

While the experiment was a successful starting point, King said his team ran into two main challenges: the noisiness of the signals and how to interpret them.

“The signals we pick up from brain activity are extremely ‘noisy.’ The sensors are quite far from the brain. There’s a skull, there’s skin, which can distort the signal we pick up. So capturing them with a sensor requires super advanced technology,” he added.

The other challenge, the expert said, is that it is still largely unclear how the brain represents language. He said that “even if we had a very clear signal, without machine learning, it would be very difficult to say, ‘Okay, this brain activity means this word, or this phoneme, or an action intention, or whatever.’”

The goal of the next step, King added, is to learn how to align the representation of speech with the representation of brain activity within an AI system.
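Work along these lines typically learns that alignment with a contrastive objective: the embedding of each brain window is pulled toward the embedding of the speech it was recorded with and pushed away from the other segments in the batch. Below is a minimal, CLIP-style sketch of such a loss; the temperature and the use of a pretrained speech model’s features are assumptions for illustration, not Meta’s exact settings.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(brain_emb, speech_emb, temperature=0.07):
    """CLIP-style loss: the i-th brain window should match the i-th speech segment.

    brain_emb, speech_emb: (batch, dim) embeddings from a brain encoder and a
    pretrained speech model. Shapes and temperature are illustrative assumptions.
    """
    brain_emb = F.normalize(brain_emb, dim=-1)
    speech_emb = F.normalize(speech_emb, dim=-1)
    logits = brain_emb @ speech_emb.T / temperature   # (batch, batch) similarities
    targets = torch.arange(logits.size(0))            # matching pairs on the diagonal
    # Symmetric cross-entropy: brain -> speech and speech -> brain
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```

Once an encoder is trained this way, it can be used for the retrieval step sketched earlier, scoring candidate audio segments against a new brain recording.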

Mind reading

While Meta’s research revolves around reading brain waves, researchers at Radboud University in the Netherlands are aiming for a more visual result.

According to Nature, the world’s leading multidisciplinary science journal, experts there are working on “mind-reading” technology that can reconstruct images of what a person is seeing from their brain activity.

To test the AI technology, the researchers showed volunteers pictures of random faces while they lay in a functional magnetic resonance imaging (fMRI) scanner, a non-invasive brain imaging device that measures variations in blood flow to identify brain activity.

The fMRI scanner monitored the activity of neurons in the brain regions associated with vision as the volunteers viewed the faces. That data was then fed to the artificial intelligence (AI) program, which used it to generate an image of what each volunteer had seen.
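In broad strokes, systems of this kind work in two stages: a mapping, learned beforehand, from fMRI activity to a compact numerical description (a “latent code”) of an image, and a pretrained generative model that turns that code back into a picture. The sketch below shows only the reconstruction step; `voxels_to_latent` and `face_generator` are hypothetical stand-ins for models the researchers would have trained separately, not the study’s actual code.

```python
import numpy as np

def reconstruct_face(fmri_pattern, voxels_to_latent, face_generator):
    """Hypothetical reconstruction step for one trial.

    fmri_pattern: 1-D array of voxel responses from visual cortex.
    voxels_to_latent: a fitted mapping from voxel responses to the generator's
        latent code (see the training sketch further below).
    face_generator: a pretrained generative face model that renders a latent
        code as an image. Both callables are assumptions for illustration.
    """
    latent_code = voxels_to_latent(np.asarray(fmri_pattern))  # voxels -> code
    return face_generator(latent_code)                        # code -> image
```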

The findings of the experiment showed that the fMRI/AI system was able to recreate the original images shown to the volunteers almost exactly.


The study’s principal investigator, Thirza Dado, an AI researcher and cognitive neuroscientist, told Mail Online that these “impressive” results indicate the possibility of fMRI/AI systems successfully reading minds in the future.

“I believe we can train the algorithm to not only render an exact face that you’re looking at, but any face that you vividly imagine, like your mother’s,” Dado explained.

“By developing this technology, it would be fascinating to decode and recreate subjective experiences, maybe even your dreams,” says Dado. “Such technological knowledge can also be incorporated into clinical applications, such as communicating with patients who are locked in a deep coma.”

The expert said the research focuses on creating technology that can help those who have lost their sight due to disease or accident to regain it.

“We are already developing brain implant cameras that will stimulate people’s brains so they can see again,” Dado added.

To “train” the AI system, the volunteers had previously been shown various other faces while their brains were scanned.

The key, according to the researchers, is that the faces the volunteers saw were not photographs of real people at all, but computer-generated images, each produced from a numerical code assigned by the program that generated it.

The volunteers’ neural responses to these “training” images were recorded with fMRI scans. The AI system then recreated each portrait by translating a volunteer’s neural response back into the corresponding code.
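Because every training face came from a known code, learning the decoder boils down to a regression problem: fit a mapping from each volunteer’s voxel responses to the codes of the faces they were viewing, then push new responses back through that mapping. Below is a minimal sketch with synthetic data and ridge regression, a common choice for fMRI decoding; the dimensions and the regression model are assumptions, not necessarily what the Radboud team used.

```python
import numpy as np

# Synthetic stand-ins: n trials, v voxels from visual cortex, d latent dimensions
n, v, d = 200, 1500, 512
voxel_responses = np.random.randn(n, v)   # fMRI patterns for the "training" faces
latent_codes = np.random.randn(n, d)      # codes that generated those faces

# Ridge regression from voxels to latent codes (closed-form solution)
lam = 10.0
W = np.linalg.solve(voxel_responses.T @ voxel_responses + lam * np.eye(v),
                    voxel_responses.T @ latent_codes)          # (v, d) weights

def voxels_to_latent(fmri_pattern):
    """Translate a new neural response back into a latent code."""
    return fmri_pattern @ W

# A new trial's voxel pattern can now be decoded into a code and handed to a
# pretrained face generator (not included here) to recreate the portrait.
new_code = voxels_to_latent(np.random.randn(v))
```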
