Unraveling algorithmic ‘amnesia’ can help us understand how we learn

A discovery of how algorithms can learn and retain information more efficiently offers potential insight into the brain’s ability to absorb new knowledge. Findings by researchers at UC Irvine’s School of Biological Sciences may help fight cognitive impairment and improve technology. Their study appears in Proceedings of the National Academy of Sciences.

The scientists focused on artificial neural networks, known as ANNs, which are algorithms designed to emulate the behavior of brain neurons. Like human minds, ANNs can absorb and classify vast amounts of information. Unlike our brains, however, ANNs tend to forget what they already know when fresh knowledge is introduced too quickly, a phenomenon known as catastrophic forgetting.
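Catastrophic forgetting is easy to reproduce in miniature. The sketch below is purely illustrative, not the study's setup: it uses synthetic two-cluster data and a tiny PyTorch network, trains on an "old" task, then on a "new" one with no replay, and shows how accuracy on the old task typically collapses.

```python
# Minimal sketch of catastrophic forgetting on synthetic data.
# Tasks, network size, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(c0, c1, n=200):
    # Two Gaussian blobs, one per class, centered at c0 and c1.
    x = torch.cat([torch.tensor(c0) + 0.3 * torch.randn(n, 2),
                   torch.tensor(c1) + 0.3 * torch.randn(n, 2)])
    y = torch.cat([torch.zeros(n, dtype=torch.long),
                   torch.ones(n, dtype=torch.long)])
    return x, y

def train(model, x, y, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

xa, ya = make_task([-2.0, 0.0], [2.0, 0.0])  # the "old" task A
xb, yb = make_task([0.0, -2.0], [0.0, 2.0])  # the "new" task B

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
train(model, xa, ya)
print(f"task A accuracy after learning A: {accuracy(model, xa, ya):.2f}")

train(model, xb, yb)  # new data only, introduced with no replay of task A
print(f"task A accuracy after learning B: {accuracy(model, xa, ya):.2f}")
```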

Researchers have long theorized that our ability to learn new concepts stems from the interplay between the brain’s hippocampus and neocortex. The hippocampus captures fresh information and replays it during rest and sleep. The neocortex then picks up the new material and revisits related existing knowledge, interleaving the fresh information with the similar categories it has built up in the past.

However, this account raises questions, given the inordinate amount of time it would take the brain to sort through all the information it has gathered over a lifetime. The same pitfall may explain why ANNs lose long-term knowledge when they assimilate new data too quickly.

Traditionally, the solution used in deep machine learning is to retrain the network on the entire set of past data, whether it is closely related to the new information or not, a time-consuming process. UCI scientists decided to investigate the problem more deeply and made a remarkable discovery.
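On the same toy setup (still a hypothetical sketch, not the study's code), the traditional full-replay remedy described above amounts to mixing every past example back in with the new data, relevant or not:

```python
# Full replay, continuing the sketch above: retrain on the new data
# interleaved with *all* past data. It restores the old task, but the
# cost grows with everything the network has ever seen.
train(model, torch.cat([xb, xa]), torch.cat([yb, ya]))
print(f"task A accuracy after full replay: {accuracy(model, xa, ya):.2f}")
```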

“We found that when ANNs interleaved a much smaller subset of old information, including mainly elements that were similar to the new knowledge they were acquiring, they learned it without forgetting what they already knew,” said Rajat Saxena, a graduate student and first author of the study. Saxena led the project with the help of Justin Shobe, an assistant project scientist. Both are members of the laboratory of Bruce McNaughton, Distinguished Professor of Neurobiology and Behavior.

“This allowed ANNs to take in new information very efficiently without having to review everything they had previously acquired,” Saxena said. “These findings suggest a brain mechanism for why experts in something can learn new things in that domain much faster than non-experts. If the brain already has a cognitive framework associated with the new information, the new material can be absorbed more quickly because changes are needed only in the part of the brain network that encodes expert knowledge.”
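On the toy setup above, similarity-weighted interleaving could look roughly like the following. The selection rule here (cosine similarity between hidden-layer activations, a 10% replay budget, softmax-weighted sampling) is an illustrative assumption, not the paper's exact algorithm.

```python
# Sketch of similarity-weighted interleaving: replay only a small subset of
# old data, sampled preferentially from items whose hidden representations
# resemble the new material. Reuses make_task/train/accuracy/xa/ya/xb/yb
# from the sketches above. Selection rule and sizes are assumptions.
import torch.nn.functional as F

# Start again from a model that knows only the old task.
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
train(model, xa, ya)

def hidden(model, x):
    # Hidden-layer activations, used here as the similarity space (an assumption).
    return model[1](model[0](x))

with torch.no_grad():
    h_old = F.normalize(hidden(model, xa), dim=1)
    h_new = F.normalize(hidden(model, xb), dim=1)
    # Score each old example by its closest cosine match among the new examples.
    sims = (h_old @ h_new.T).max(dim=1).values
    # Draw a small replay subset (10% of the old data), favoring similar items.
    k = len(xa) // 10
    idx = torch.multinomial(torch.softmax(sims / 0.1, dim=0), k)

# Interleave the new data with only that small, similarity-weighted subset.
train(model, torch.cat([xb, xa[idx]]), torch.cat([yb, ya[idx]]))
print(f"task A accuracy after similarity-weighted interleaving: {accuracy(model, xa, ya):.2f}")
print(f"task B accuracy after similarity-weighted interleaving: {accuracy(model, xb, yb):.2f}")
```

The design point the quote makes is visible in the replay budget: only the slice of old knowledge that overlaps with the new material needs to be revisited, so the cost no longer scales with the network's entire history.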

The finding has the potential to address cognitive issues, according to McNaughton. “Understanding the mechanisms behind learning is essential to making progress,” he said. “It gives us insight into what happens when brains don’t work the way they should. We can develop learning strategies for people with memory problems caused by aging or those with brain damage. It may also lead to the ability to manipulate brain circuits so that people can overcome these deficits.”

The findings also open the door to more precise and efficient algorithms for machines such as medical diagnostic equipment and autonomous cars.

Reference: Saxena R, Shobe JL, McNaughton BL. Learning in deep neural networks and brains with similarity-weighted interleaved learning. Proc Natl Acad Sci USA. 2022;119(27):e2115229119. doi:10.1073/pnas.2115229119

