Cortical Hierarchies and Learning Mechanisms Explored

Published on May 24, 2023

Imagine your brain as a network that represents incoming information at increasing levels of abstraction, much as the visual stream starts with simple edge detectors and gradually builds up representations of whole objects. Similar hierarchies emerge in artificial neural networks (ANNs) trained for object recognition, but the standard training algorithm, backpropagation, is not biologically plausible. Alternative methods such as Equilibrium Propagation and Deep Feedback Control have been developed, yet many of them require a neuron to compare signals arriving at different compartments, and it is unclear how a real neuron could do that. A possible solution is to let the feedback signal change the neuron's firing rate and combine this with a differential Hebbian update, a rate-based version of the classic neuroscience learning rule known as spike-timing-dependent plasticity (STDP). Weight updates of this form minimize loss functions that are equivalent to the error-based losses used in machine learning, and the same mechanism carries over to other feedback-based deep learning frameworks such as Predictive Coding and Equilibrium Propagation. By removing a key requirement of biologically plausible deep learning models, the study suggests how our brains may implement hierarchical learning using temporal Hebbian learning rules.
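
The core rule can be stated compactly. In a differential Hebbian update, the weight change is driven by the presynaptic firing rate multiplied by the rate of change of the postsynaptic firing rate (the notation below is mine, not taken from the paper):

\[
\frac{d w_{ij}}{dt} \;\propto\; r_j(t)\,\frac{d r_i(t)}{dt},
\]

where \(w_{ij}\) is the synaptic weight from presynaptic neuron \(j\) to postsynaptic neuron \(i\), and \(r_j\), \(r_i\) are their firing rates. If it is the top-down feedback that moves \(r_i\), the derivative term implicitly carries the error information, so the neuron never has to explicitly compare signals from different compartments.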

A key driver of mammalian intelligence is the ability to represent incoming sensory information across multiple levels of abstraction. For example, in the visual ventral stream, incoming signals are first represented as low-level edge filters and then transformed into high-level object representations. Similar hierarchical structures routinely emerge in artificial neural networks (ANNs) trained for object recognition, suggesting that similar representations may underlie biological neural networks. However, the classical ANN training algorithm, backpropagation, is considered biologically implausible, and alternative biologically plausible training methods have therefore been developed, such as Equilibrium Propagation, Deep Feedback Control, Supervised Predictive Coding, and Dendritic Error Backpropagation. Several of these models propose that local errors are computed for each neuron by comparing apical and somatic activities. From a neuroscience perspective, however, it is not clear how a neuron could compare such compartmental signals. Here, we propose a solution to this problem: we let the apical feedback signal change the postsynaptic firing rate and combine this with a differential Hebbian update, a rate-based version of classical spike-timing-dependent plasticity (STDP). We prove that weight updates of this form minimize two alternative loss functions, the inference latency and the amount of top-down feedback required, and that these are equivalent to the error-based losses used in machine learning. Moreover, we show that differential Hebbian updates work comparably well in other feedback-based deep learning frameworks such as Predictive Coding and Equilibrium Propagation. Finally, our work removes a key requirement of biologically plausible models of deep learning and proposes a learning mechanism that could explain how temporal Hebbian learning rules implement supervised hierarchical learning.
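
The abstract does not spell out an algorithm, but the combination of a feedback-nudged firing rate with a differential Hebbian update can be illustrated with a minimal rate-based sketch for a single neuron. Everything below (the rectified-linear rate function, the nudging constant beta, and all variable names) is an illustrative assumption, not the paper's actual model:

import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one rate neuron with input weights w, trained to match a target rate.
n_inputs = 5
w = rng.normal(scale=0.1, size=n_inputs)

def rate(w, x):
    """Somatic firing rate as a simple rectified-linear function of the input."""
    return np.maximum(0.0, w @ x)

eta = 0.05   # learning rate
beta = 0.5   # strength of the apical (top-down) nudge -- assumed, not from the paper

for step in range(200):
    x = rng.uniform(0.0, 1.0, size=n_inputs)    # presynaptic rates
    target = 0.8 * x.sum() / n_inputs            # arbitrary teaching signal

    r_free = rate(w, x)                           # rate before feedback arrives
    r_nudged = r_free + beta * (target - r_free)  # feedback nudges the rate toward the target

    # Differential Hebbian update: presynaptic rate times the *change* in postsynaptic
    # rate (a discrete stand-in for dr_post/dt).  Because the change equals
    # beta * (target - r_free), the update is proportional to an error-driven delta rule.
    dr_post = r_nudged - r_free
    w += eta * dr_post * x

print("final error:", abs(target - rate(w, x)))

In this discrete sketch the rate change r_nudged - r_free stands in for the temporal derivative of the postsynaptic rate, and because the nudge is proportional to (target - r_free) the weight update reduces to an error-driven one, which is the kind of equivalence between temporal Hebbian updates and error-based losses that the paper formalizes.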

Read Full Article (External Site)
