Spiking Models for Time Series Classification with Energy Efficiency

Published on June 8, 2023

Imagine you have two cars: one is a sleek sports car with a powerful, energy-hungry engine, while the other is a compact electric car that efficiently zips around town without wasting fuel. There are similar distinctions among machine learning algorithms. Many advanced algorithms achieve impressive results but consume a lot of energy because they rely on energy-intensive CPUs and GPUs. There is a promising alternative, however: computing with Spiking Networks on specialized neuromorphic hardware, which has been shown to be highly energy-efficient.

Inspired by Reservoir Computing and Legendre Memory Units, the researchers in this study present two spiking models for Time Series Classification (TSC). The first model follows the general Reservoir Computing architecture, while the second adds non-linearity in the readout layer. By training the second model with the Surrogate Gradient Descent method, they not only achieve strong results through non-linear decoding of temporal features but also cut computational overhead by significantly reducing the number of neurons compared to previous models. Experiments on five TSC datasets show promising results, including an accuracy improvement of up to 28.607%. This research showcases the potential of energy-efficient spiking models to tackle TSC tasks. To learn more about the methodology and results, dive into the full article!

A variety of advanced machine learning and deep learning algorithms achieve state-of-the-art performance on various temporal processing tasks. However, these methods are highly energy-inefficient, as they run mainly on power-hungry CPUs and GPUs. Computing with Spiking Networks, on the other hand, has been shown to be energy-efficient on specialized neuromorphic hardware such as Loihi, TrueNorth, and SpiNNaker. In this work, we present two architectures of spiking models, inspired by the theory of Reservoir Computing and Legendre Memory Units, for the Time Series Classification (TSC) task. Our first spiking architecture is closer to the general Reservoir Computing architecture, and we successfully deploy it on Loihi; the second spiking architecture differs from the first by the inclusion of non-linearity in the readout layer. Our second model (trained with the Surrogate Gradient Descent method) shows that non-linear decoding of the linearly extracted temporal features through spiking neurons not only achieves promising results but also offers low computational overhead by significantly reducing the number of neurons compared to the popular LSM-based models: more than a 40x reduction with respect to the recent spiking model we compare against. We experiment on five TSC datasets and achieve new SoTA spiking results (as much as a 28.607% accuracy improvement on one of the datasets), thereby showing the potential of our models to address TSC tasks in a green, energy-efficient manner. In addition, we also perform energy profiling and comparison on Loihi and a CPU to support our claims.
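For readers unfamiliar with Legendre Memory Units, the key idea (taken from the original LMU work, not specific to this paper) is a linear time-invariant memory whose state $m(t) \in \mathbb{R}^d$ approximates a sliding window of the input $u(t)$ over $[t-\theta, t]$ in the basis of the first $d$ Legendre polynomials:

$$\dot{m}(t) = A\,m(t) + B\,u(t), \qquad A_{ij} = \frac{2i+1}{\theta}\begin{cases} -1 & i < j \\ (-1)^{i-j+1} & i \ge j \end{cases}, \qquad B_i = \frac{2i+1}{\theta}(-1)^i,$$

with $i, j \in \{0, \dots, d-1\}$. This is the kind of linear extraction of temporal features that the readout layer then decodes.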
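To make the training method concrete, here is a minimal PyTorch sketch of surrogate gradient descent for a leaky integrate-and-fire (LIF) neuron. It illustrates the general technique, not the authors' implementation; the fast-sigmoid surrogate, the beta leak factor, and the toy readout loss are all assumptions chosen for brevity.

    # Minimal sketch (not the paper's code): surrogate gradient descent
    # for a LIF neuron. The forward pass emits a hard spike (Heaviside
    # step); the backward pass substitutes a smooth "fast sigmoid"
    # derivative so gradients can flow through the non-differentiable spike.
    import torch

    class SurrogateSpike(torch.autograd.Function):
        @staticmethod
        def forward(ctx, membrane_potential):
            ctx.save_for_backward(membrane_potential)
            return (membrane_potential > 0.0).float()  # hard threshold spike

        @staticmethod
        def backward(ctx, grad_output):
            (membrane_potential,) = ctx.saved_tensors
            # Fast-sigmoid surrogate derivative: 1 / (1 + |u|)^2
            surrogate = 1.0 / (1.0 + membrane_potential.abs()) ** 2
            return grad_output * surrogate

    def lif_step(x, mem, weight, beta=0.9, threshold=1.0):
        """One LIF time step: leaky integration, spike, soft reset."""
        mem = beta * mem + x @ weight          # leak + input current
        spk = SurrogateSpike.apply(mem - threshold)
        mem = mem - spk * threshold            # soft reset after a spike
        return spk, mem

    # Toy usage: unroll over time and backpropagate through the spikes.
    T, batch, n_in, n_out = 20, 4, 8, 2
    weight = torch.randn(n_in, n_out, requires_grad=True)
    x = torch.randn(T, batch, n_in)
    mem = torch.zeros(batch, n_out)
    spikes = []
    for t in range(T):
        spk, mem = lif_step(x[t], mem, weight)
        spikes.append(spk)
    loss = torch.stack(spikes).sum(0).pow(2).mean()  # placeholder readout loss
    loss.backward()  # gradients reach `weight` via the surrogate derivative

Surrogate-gradient training of this kind is typically done offline, with the trained network then deployed for inference on neuromorphic hardware such as Loihi.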

Read Full Article (External Site)
