Efficient Training Algorithm for Spiking Neural Networks

Published on June 27, 2023

Just like learning how to ride a bike, training spiking neural networks can be a complex and time-consuming process. But fear not! Scientists have come up with an efficient and scalable algorithm that makes training faster and far less resource-hungry. It’s like upgrading from a regular bicycle to a high-speed electric bike! Using optimized CPU and GPU implementations of the recursive least-squares algorithm, researchers trained networks of a million neurons and over a billion synapses, with the GPU implementation running a whopping 1,000 times faster than the unoptimized CPU version. With this algorithm, scientists can simulate the complex computations performed by the nervous system in a more interactive and realistic manner. They can even train models while in-vivo experiments are still being conducted, closing the loop between modeling and real-world observations. If you’re curious to dive deeper into the research, check out the full article!

Training spiking recurrent neural networks on neuronal recordings or behavioral tasks has become a popular way to study computations performed by the nervous system. As the size and complexity of neural recordings increase, there is a need for efficient algorithms that can train models in a short period of time using minimal resources. We present optimized CPU and GPU implementations of the recursive least-squares algorithm in spiking neural networks. The GPU implementation can train networks of one million neurons, with 100 million plastic synapses and a billion static synapses, about 1,000 times faster than an unoptimized reference CPU implementation. We demonstrate the code’s utility by training a network, in less than an hour, to reproduce the activity of >66,000 recorded neurons of a mouse performing a decision-making task. The fast implementation enables a more interactive in-silico study of the dynamics and connectivity underlying multi-area computations. It also admits the possibility of training models as in-vivo experiments are being conducted, thus closing the loop between modeling and experiments.
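The paper ships optimized CPU and GPU code, which is not reproduced here. For readers unfamiliar with the method, the sketch below shows the standard rank-one recursive least-squares update that the abstract refers to, applied to a single trained unit. It is a minimal NumPy illustration under that assumption, not the authors' implementation, and the names (`rls_step`, `P`, `w`, `r`, `alpha`) are ours.

```python
import numpy as np

def rls_step(P, w, r, target):
    """One recursive least-squares (RLS) update for a single trained unit.

    P      : (N, N) running inverse of the presynaptic correlation matrix
    w      : (N,)   plastic weights onto the trained unit
    r      : (N,)   presynaptic activity (e.g. filtered spike trains)
    target : float  desired output of the unit at this time step
    """
    k = P @ r                      # gain vector
    c = 1.0 / (1.0 + r @ k)        # normalizer from the Sherman-Morrison identity
    P -= c * np.outer(k, k)        # rank-one update of the inverse correlation matrix
    err = w @ r - target           # instantaneous error before the update
    w -= c * err * k               # weight correction proportional to the error
    return P, w

# Toy usage: drive the weights to track a sine wave from random
# placeholder activity (standing in for recorded or simulated spikes).
rng = np.random.default_rng(0)
N, steps, dt, alpha = 200, 1000, 1e-3, 1.0
P = np.eye(N) / alpha              # regularized initialization of P
w = np.zeros(N)
for t in range(steps):
    r = rng.standard_normal(N)
    P, w = rls_step(P, w, r, np.sin(2 * np.pi * t * dt))
```

The Sherman–Morrison rank-one update keeps the per-step cost quadratic rather than cubic in the number of presynaptic inputs, and it is this kind of update, applied across every trained neuron, that the paper's optimized implementations accelerate.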

Read Full Article (External Site)
