Imagine your brain as a complex network that processes information at different levels of abstraction, much like the visual stream, which begins with edge filters and gradually transforms them into object representations. A similar hierarchy emerges in artificial neural networks (ANNs) trained for object recognition. However, traditional training algorithms such as backpropagation are not biologically plausible. To address this, alternative methods like Equilibrium Propagation and Deep Feedback Control have been developed. One remaining challenge is how a single neuron can compare signals arriving in different compartments. A possible solution is to let feedback signals change the neuron's firing rate and to combine this rate change with a differential Hebbian update, a modified version of spike-timing-dependent plasticity (STDP), a classic learning rule in neuroscience. This approach minimizes the loss functions used in error-based machine learning. The implications of this research extend to other deep learning frameworks such as Predictive Coding and Equilibrium Propagation. By removing a key requirement of deep learning models, this study sheds light on how our brains may implement hierarchical learning using temporal Hebbian learning rules.
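As a rough illustration only, and not the paper's exact formulation, a differential Hebbian update can be written as a weight change proportional to the presynaptic rate times the temporal change of the postsynaptic rate, so that a feedback-induced shift in firing rate drives the synaptic update. A minimal sketch, assuming rate-coded neurons and hypothetical variable names, might look like this:

```python
import numpy as np

def differential_hebbian_update(w, r_pre, r_post_before, r_post_after, dt=1.0, lr=0.01):
    """Illustrative differential Hebbian rule: dw ~ r_pre * d(r_post)/dt.

    r_post_before: postsynaptic rates before the feedback signal arrives
    r_post_after:  postsynaptic rates after feedback has nudged the firing rate
    The finite rate difference stands in for the temporal derivative, so the
    update is driven by how feedback changes each neuron's activity.
    """
    dr_post = (r_post_after - r_post_before) / dt     # approximate d(r_post)/dt
    return w + lr * np.outer(dr_post, r_pre)           # one weight change per synapse

# Toy usage: 3 presynaptic and 2 postsynaptic rate-coded neurons (made-up values).
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(2, 3))
r_pre = np.array([0.2, 0.8, 0.5])
r_post_free = w @ r_pre                                 # rates before feedback
r_post_nudged = r_post_free + np.array([0.05, -0.03])   # feedback nudges the firing rates
w = differential_hebbian_update(w, r_pre, r_post_free, r_post_nudged)
```

In this sketch the feedback signal only appears through its effect on the postsynaptic rate, which is the intuition behind letting a temporal Hebbian rule approximate error-based learning.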
