A Neuro-Inspired Model of Lifelong Learning and Memory Protection.

Published on September 9, 2022

Imagine your brain as a master learner, constantly acquiring new information while guarding the memories of past experiences. Scientists have developed a computational model inspired by the brain's neural networks, specifically the hippocampus and its surrounding regions, to shed light on how this remarkable lifelong learning ability works. The model incorporates two key mechanisms: one involving dopamine, a neurotransmitter released in response to novel stimuli that enhances plasticity only when it is needed, and another involving inhibitory connections among neurons in the hippocampus. Together, these mechanisms protect previously encoded memories from being disrupted when new information is learned. By testing the model on images from the MNIST machine learning dataset and on more naturalistic images, the researchers found that it effectively mitigates catastrophic interference, allowing new stimuli to be learned without erasing prior knowledge. This study opens up exciting possibilities for further exploration through animal experiments and may also inspire advancements in machine learning algorithms.

The human brain has a remarkable lifelong learning capability to acquire new experiences while retaining previously acquired information. Several hypotheses have been proposed to explain this capability, but the underlying mechanisms are still unclear. Here, we propose a neuro-inspired firing-rate computational model involving the hippocampus and surrounding areas that encompasses two key mechanisms possibly underlying this capability. The first is based on signals encoded by the neuromodulator dopamine, which is released in response to novel stimuli and enhances plasticity only when needed. The second is based on a homeostatic plasticity mechanism that involves the lateral inhibitory connections of the pyramidal neurons of the hippocampus. These mechanisms tend to protect neurons that have already been heavily employed in encoding previous experiences. The model was tested with images from the MNIST machine learning dataset, and with more naturalistic images, for its ability to mitigate catastrophic interference in lifelong learning. The results show that the proposed biologically grounded mechanisms can effectively enhance the learning of new stimuli while protecting previously acquired knowledge. The proposed mechanisms could be investigated in future empirical animal experiments and inspire machine learning models.
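
To make the two mechanisms concrete, here is a minimal sketch in Python (NumPy). It is not the authors' published model: the full firing-rate equations are not reproduced in this summary, so every name, constant, and update rule below (the familiarity score, the dopamine_gate function, the usage-based handicap standing in for homeostatic lateral inhibition, the winner-take-all layer) is an illustrative assumption. The sketch only shows how a novelty-gated learning rate and a usage-dependent penalty can, together, steer new stimuli toward uncommitted units while leaving already-committed units largely untouched.

import numpy as np

# Illustrative sketch only (not the authors' equations): a winner-take-all
# firing-rate layer standing in for hippocampal pyramidal neurons, with
# (1) a novelty-gated "dopamine" learning rate, and
# (2) a usage-dependent handicap standing in for homeostatic lateral inhibition.
rng = np.random.default_rng(0)

N_IN, N_HID = 64, 32                      # input size and hidden-layer size (arbitrary)
W = rng.normal(0.0, 0.1, (N_HID, N_IN))   # feedforward weights
usage = np.zeros(N_HID)                   # how often each unit has been recruited
BASE_LR = 0.5                             # plasticity when the dopamine gate is open
INHIBITION = 0.2                          # strength of the usage-dependent handicap

def familiarity(x):
    """Crude familiarity score: best normalized response of any unit to x."""
    return np.max(W @ x) / (np.linalg.norm(x) + 1e-8)

def dopamine_gate(x, threshold=0.5):
    """Novelty-triggered plasticity: full learning rate only for unfamiliar inputs."""
    return 1.0 if familiarity(x) < threshold else 0.1

def present(x):
    """One learning step combining the two protection mechanisms."""
    # Competition with a handicap on heavily used units, so new stimuli are
    # steered toward less committed neurons (protecting older memories).
    drive = W @ x - INHIBITION * usage
    winner = int(np.argmax(drive))
    # Dopamine-gated Hebbian-style update: strong for novel inputs, weak otherwise.
    lr = BASE_LR * dopamine_gate(x)
    W[winner] += lr * (x - W[winner])      # pull the winner's weights toward the input
    usage[winner] += 1.0                   # record that this unit has been employed

# Two non-overlapping toy patterns: one "old" memory and one novel stimulus.
x_old = np.zeros(N_IN); x_old[:8] = 1.0
x_new = np.zeros(N_IN); x_new[32:40] = 1.0

for _ in range(10):
    present(x_old)                          # encode the old pattern first

print("gate for familiar input:", dopamine_gate(x_old))   # ~0.1: plasticity suppressed
print("gate for novel input:   ", dopamine_gate(x_new))   # 1.0: plasticity enhanced
present(x_new)                              # the novel input recruits a fresh unit
print("units recruited so far: ", np.nonzero(usage)[0])

In this toy run the familiar pattern should no longer trigger strong plasticity, and the usage handicap keeps its dedicated unit from being captured by the novel pattern, which is the qualitative behaviour (mitigated catastrophic interference) that the paper reports for MNIST and naturalistic images.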

Read Full Article (External Site)
