Efficient Large-Scale SNN Simulation: Comparing GeNN and NEST

Published on February 10, 2023

Simulating large-scale spiking neural networks (SNNs) requires systematic calibration of many free model parameters to achieve robust network function, which demands substantial computing power and memory. This study compares two approaches: the NEural Simulation Tool (NEST), which parallelizes simulation across CPU cores, and the GPU-enhanced Neural Network (GeNN) simulator, which exploits the massively parallel GPU architecture for faster simulation. The results show that simulation time scales linearly with the simulated biological model time and approximately linearly with model size. GeNN handled networks of up to 3.5 million neurons on a high-end GPU and achieved real-time simulation with 100,000 neurons. Each approach has its advantages and disadvantages depending on the use case, making this study a useful guide for anyone planning large-scale SNN simulations of nervous system function.

Spiking neural networks (SNNs) represent the state-of-the-art approach to the biologically realistic modeling of nervous system function. The systematic calibration of multiple free model parameters is necessary to achieve robust network function and demands high computing power and large memory resources. Special requirements arise from closed-loop model simulation in virtual environments and from real-time simulation in robotic applications. Here, we compare two complementary approaches to efficient large-scale and real-time SNN simulation. The widely used NEural Simulation Tool (NEST) parallelizes simulation across multiple CPU cores. The GPU-enhanced Neural Network (GeNN) simulator uses the highly parallel GPU-based architecture to gain simulation speed. We quantify fixed and variable simulation costs on single machines with different hardware configurations. As a benchmark model, we use a spiking cortical attractor network with a topology of densely connected excitatory and inhibitory neuron clusters with homogeneous or distributed synaptic time constants, and compare it to the random balanced network. We show that simulation time scales linearly with the simulated biological model time and, for large networks, approximately linearly with the model size as dominated by the number of synaptic connections. Additional fixed costs with GeNN are almost independent of model size, while fixed costs with NEST increase linearly with model size. We demonstrate how GeNN can be used for simulating networks with up to 3.5 · 10^6 neurons (> 3 · 10^12 synapses) on a high-end GPU, and up to 250,000 neurons (25 · 10^9 synapses) on a low-cost GPU. Real-time simulation was achieved for networks with 100,000 neurons. Network calibration and parameter grid search can be efficiently achieved using batch processing. We discuss the advantages and disadvantages of both approaches for different use cases.
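For readers who want to try the CPU-parallel side of this comparison, the sketch below sets up a small random balanced excitatory/inhibitory network in PyNEST. It is a minimal illustration only, assuming the NEST 3.x Python API; the population sizes, connection probability, weights, and Poisson drive are placeholder values, not the benchmark configuration used in the study (GeNN's PyGeNN interface would be the corresponding GPU-side entry point).

```python
# Minimal sketch of a random balanced E/I network in PyNEST (NEST 3.x assumed).
# All parameter values are illustrative placeholders, not the study's benchmark setup.
import nest

nest.ResetKernel()
nest.SetKernelStatus({"local_num_threads": 4,  # parallelize across CPU cores
                      "resolution": 0.1})      # simulation step in ms

N_E, N_I = 8000, 2000       # excitatory / inhibitory population sizes
p_conn = 0.1                # connection probability
J_E, J_I = 0.2, -1.0        # synaptic weights in mV (inhibition-dominated)
delay = 1.5                 # synaptic delay in ms

# Leaky integrate-and-fire neurons with exponential synaptic currents
pop_e = nest.Create("iaf_psc_exp", N_E)
pop_i = nest.Create("iaf_psc_exp", N_I)

# External Poisson drive and a spike recorder for the excitatory population
noise = nest.Create("poisson_generator", params={"rate": 15000.0})
spikes = nest.Create("spike_recorder")

conn = {"rule": "pairwise_bernoulli", "p": p_conn}
nest.Connect(pop_e, pop_e + pop_i, conn, {"weight": J_E, "delay": delay})
nest.Connect(pop_i, pop_e + pop_i, conn, {"weight": J_I, "delay": delay})
nest.Connect(noise, pop_e + pop_i, syn_spec={"weight": J_E, "delay": delay})
nest.Connect(pop_e, spikes)

nest.Simulate(1000.0)  # simulated biological model time in ms

# Mean excitatory firing rate over the 1 s of simulated time
rate = spikes.n_events / N_E
print(f"mean excitatory firing rate: {rate:.1f} Hz")
```

Because wall-clock time grows linearly with the simulated biological time, the value passed to nest.Simulate() is one of the main cost levers in such a script; increasing the network size mainly adds cost through the growing number of synaptic connections.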

