Driving the Need for Speed in Neuronal Network Simulations

Published on May 11, 2022

Computational neuroscience is like assembling a complex puzzle: to understand how brains work, scientists build intricate network models and study their dynamics. Making progress, however, requires faster simulations, and that's where benchmarking comes in. Much as we test-drive cars, we run the same simulations on different hardware and software setups to measure their speed and efficiency. But without standardized measures, benchmark results are hard to compare. To solve this problem, we introduce a modular workflow that breaks benchmarking down into separate, well-defined segments, enabling better comparability. To put the workflow into action, we've created beNNch: an open-source software framework that configures, executes, and analyzes benchmark simulations of neuronal networks. With beNNch, we can identify performance bottlenecks and guide the development of faster simulation technology. So hop in and join us on this thrilling journey to unravel the mysteries of the brain!

Modern computational neuroscience strives to develop complex network models to explain dynamics and function of brains in health and disease. This process goes hand in hand with advancements in the theory of neuronal networks and increasing availability of detailed anatomical data on brain connectivity. Large-scale models that study interactions between multiple brain areas with intricate connectivity and investigate phenomena on long time scales such as system-level learning require progress in simulation speed. The corresponding development of state-of-the-art simulation engines relies on information provided by benchmark simulations which assess the time-to-solution for scientifically relevant, complementary network models using various combinations of hardware and software revisions. However, maintaining comparability of benchmark results is difficult due to a lack of standardized specifications for measuring the scaling performance of simulators on high-performance computing (HPC) systems. Motivated by the challenging complexity of benchmarking, we define a generic workflow that decomposes the endeavor into unique segments consisting of separate modules. As a reference implementation for the conceptual workflow, we develop beNNch: an open-source software framework for the configuration, execution, and analysis of benchmarks for neuronal network simulations. The framework records benchmarking data and metadata in a unified way to foster reproducibility. For illustration, we measure the performance of various versions of the NEST simulator across network models with different levels of complexity on a contemporary HPC system, demonstrating how performance bottlenecks can be identified, ultimately guiding the development toward more efficient simulation technology.
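The central quantity here is time-to-solution, which the workflow decomposes into phases such as network construction and state propagation. The following Python sketch illustrates how such phase-wise timings, together with run metadata, might be collected for a toy network through the public PyNEST interface. It is a minimal illustration under stated assumptions, not beNNch's actual code: the model, its parameters, and the benchmark_record.json layout are invented for demonstration.

import json
import time

import nest  # PyNEST, the Python interface to the NEST simulator

# Hypothetical toy benchmark; sizes and parameters are illustrative,
# not those of any model shipped with beNNch.
N_NEURONS = 10_000
SIM_TIME_MS = 1_000.0

nest.ResetKernel()
nest.SetKernelStatus({"resolution": 0.1})  # simulation step in ms

# Phase 1: network construction
t0 = time.perf_counter()
neurons = nest.Create("iaf_psc_alpha", N_NEURONS)
nest.Connect(
    neurons, neurons,
    {"rule": "fixed_indegree", "indegree": 100},
    {"synapse_model": "static_synapse", "weight": 10.0, "delay": 1.5},  # NEST 3 syntax
)
t_build = time.perf_counter() - t0

# Phase 2: state propagation
t0 = time.perf_counter()
nest.Simulate(SIM_TIME_MS)
t_sim = time.perf_counter() - t0

# Record timings together with metadata so that runs on different
# hardware/software combinations stay comparable; the file layout
# here is an assumption for illustration.
record = {
    "model": "toy_random_network",
    "nest_version": nest.__version__,  # attribute available in NEST 3
    "num_neurons": N_NEURONS,
    "sim_time_ms": SIM_TIME_MS,
    "time_network_construction_s": t_build,
    "time_state_propagation_s": t_sim,
}
with open("benchmark_record.json", "w") as f:
    json.dump(record, f, indent=2)

The published framework goes well beyond this sketch: it orchestrates benchmark runs on HPC systems, sweeps over hardware and software configurations, and stores data and metadata in a unified format so that results from different setups remain comparable.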

