Unlocking the Secrets of Maze Navigation Through Neural Replay!

Published on February 9, 2023

Imagine you’re in a maze, determined to find your way out. As you explore, your brain’s place cells activate, creating a mental map of your surroundings. But what happens when the maze changes? Recent research has shown that during sleep or rest, the brain replays these place cell activations, allowing the mental map to be updated so you can navigate the ever-changing maze. However, existing computational models fall short when it comes to generating this kind of flexible, layout-conforming replay. That’s where this study comes in! The researchers developed a computational model that simulates this neural replay and explains how it drives the ability to adapt to a changing maze. Using a Hebbian-like rule to learn the connections between place cells and a continuous attractor network (CAN) to simulate their interactions, they demonstrated the model’s flexibility during maze navigation. The model not only re-learns synaptic strengths during exploration but also plans paths based on previous experience. These findings shed light on the mechanisms behind our ability to navigate and may even have implications for understanding memory and learning more generally. If you’re curious about how your brain tackles mazes, be sure to dive into the full research article!

Recent experimental observations have shown that the reactivation of hippocampal place cells (PC) during sleep or wakeful immobility depicts trajectories that can go around barriers and can flexibly adapt to a changing maze layout. However, existing computational models of replay fall short of generating such layout-conforming replay, restricting their usage to simple environments, like linear tracks or open fields. In this paper, we propose a computational model that generates layout-conforming replay and explains how such replay drives the learning of flexible navigation in a maze. First, we propose a Hebbian-like rule to learn the inter-PC synaptic strength during exploration. Then we use a continuous attractor network (CAN) with feedback inhibition to model the interaction among place cells and hippocampal interneurons. The activity bump of place cells drifts along paths in the maze, which models layout-conforming replay. During replay in sleep, the synaptic strengths from place cells to striatal medium spiny neurons (MSN) are learned by a novel dopamine-modulated three-factor rule to store place-reward associations. During goal-directed navigation, the CAN periodically generates replay trajectories from the animal’s location for path planning, and the trajectory leading to a maximal MSN activity is followed by the animal. We have implemented our model in a high-fidelity virtual rat in the MuJoCo physics simulator. Extensive experiments have demonstrated that its superior flexibility during navigation in a maze is due to a continuous re-learning of inter-PC and PC-MSN synaptic strength.
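
The abstract outlines two learning rules: a Hebbian-like rule that strengthens connections between place cells that are co-active during exploration, and a dopamine-modulated three-factor rule that stores place-reward associations in place-cell-to-MSN synapses during replay. The paper’s exact equations aren’t reproduced here, so the sketch below is only a rough illustration of those two ideas in Python/NumPy; the toy maze, the Gaussian place fields, the learning rates, and the specific update forms are assumptions chosen for illustration, not the authors’ implementation.

```python
import numpy as np

# --- Minimal, illustrative sketch (not the paper's actual model) ---
# Place cells tile a small maze; cells that are co-active along the rat's
# path get their mutual (inter-PC) weights strengthened by a Hebbian-like
# rule, so the learned weight matrix reflects the maze layout.

rng = np.random.default_rng(0)

# Toy maze: 1 = walkable, 0 = wall (layout is an arbitrary example)
maze = np.array([
    [1, 1, 1, 0, 1],
    [0, 0, 1, 0, 1],
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
])
centers = np.argwhere(maze == 1).astype(float)   # one place cell per open cell
n_pc = len(centers)
sigma = 0.8                                      # place-field width (assumed)

def pc_rates(pos):
    """Gaussian place-cell tuning around each field center."""
    d2 = np.sum((centers - pos) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

# A hand-picked walkable trajectory through the toy maze
trajectory = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3), (2, 4), (3, 4), (4, 4)]

# Hebbian-like learning of inter-PC weights during "exploration"
W_pc = np.zeros((n_pc, n_pc))
eta_pc = 0.05
for pos in trajectory:
    r = pc_rates(np.array(pos, dtype=float))
    W_pc += eta_pc * np.outer(r, r)              # strengthen co-active pairs
np.fill_diagonal(W_pc, 0.0)

# Dopamine-modulated three-factor rule (illustrative form):
# dW = eta * eligibility(pre, post) * dopamine
n_msn = 3
W_msn = np.zeros((n_msn, n_pc))
eta_msn = 0.1
reward_pos = np.array([4.0, 4.0])                # assumed goal location
for pos in trajectory:                           # stands in for a replay sweep
    r_pc = pc_rates(np.array(pos, dtype=float))
    r_msn = np.maximum(W_msn @ r_pc + 0.1 * rng.random(n_msn), 0.0)
    eligibility = np.outer(r_msn, r_pc)
    dopamine = 1.0 if np.allclose(pos, reward_pos) else 0.0
    W_msn += eta_msn * dopamine * eligibility    # only rewarded states imprint

print("Strongest inter-PC link (cell index pair):",
      np.unravel_index(np.argmax(W_pc), W_pc.shape))
print("PC-to-MSN weight per place cell (largest near the reward):",
      np.round(W_msn.max(axis=0), 3))
```

In the full model, replay trajectories are generated by a CAN whose activity bump drifts along the learned inter-PC connections, rather than by re-running a stored trajectory as this simplified sketch does.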

Read Full Article (External Site)
