Robust and efficient training of deep spiking neural networks
Author(s)
Perez Nieves, Nicolas
Type
Thesis or dissertation
Abstract
This thesis focuses on the study of training deep spiking neural networks (SNNs).
In recent years there has been an increasing interest in using spiking neurons for
deep learning with the aim of leveraging their unique properties and characteristics.
These include their potential for very energy efficient training and inference due to
their highly sparse activity and their suitability to model biological neurons.
We first introduce the fundamental models used in the SNN training literature,
including neuron models at different levels of abstraction, synapses,
networks of neurons, and neurons under noise. We also review the main SNN training
methods developed to date. Then, we study the role of
neural heterogeneity, evaluating the performance and robustness of SNNs under different
heterogeneity schemes with two different supervised learning methods. Next,
we show how the sparse activity present in the forward pass of SNNs can also be
achieved in the backward pass, leading to highly efficient implementations that can
speed up the backward pass by up to 150x and save 85% of the memory. Finally, we
aim to solve the weight initialisation problem for SNNs, seeking predictable
network activity while preventing the gradient from vanishing or exploding. In
this process, we identify and solve the firing rate collapse issue caused by the discretisation
of SNNs for simulation. In addition, we obtain theoretical and empirical
results for a general SNN initialisation strategy that makes use of variance propagation,
diffusion/shot-noise/threshold integration methods, and the solution to
the firing rate collapse problem we previously found.
Besides the ideas and experiments discussed in this thesis, code for the methods
described here can be found at https://github.com/npvoid.
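To illustrate the kind of discretised SNN simulation the abstract refers to, the following is a minimal sketch of a leaky integrate-and-fire (LIF) layer with a generic variance-scaled weight initialisation. This is an illustrative sketch only: the function names, parameters, and initialisation scheme are assumptions for demonstration, not the thesis's actual code or its specific initialisation strategy.

```python
import numpy as np

def lif_forward(x, w, alpha=0.9, v_th=1.0):
    """Simulate one discretised leaky integrate-and-fire (LIF) layer.

    x: binary input spike trains, shape (T, n_in)
    w: synaptic weights, shape (n_in, n_out)
    alpha: membrane leak factor per time step
    v_th: firing threshold
    Returns binary output spike trains, shape (T, n_out).
    """
    T = x.shape[0]
    n_out = w.shape[1]
    v = np.zeros(n_out)               # membrane potentials
    spikes = np.zeros((T, n_out))
    for t in range(T):
        v = alpha * v + x[t] @ w      # leaky integration of input current
        spikes[t] = (v >= v_th).astype(float)  # threshold crossing emits a spike
        v = v * (1.0 - spikes[t])     # reset membrane potential after a spike
    return spikes

# A generic 1/sqrt(fan-in) variance scaling for the weights (illustrative,
# not necessarily the initialisation derived in the thesis).
rng = np.random.default_rng(0)
n_in, n_out, T = 100, 50, 20
w = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
x = (rng.random((T, n_in)) < 0.1).astype(float)  # inputs firing at ~10%
s = lif_forward(x, w)
```

Note that the output spike trains are typically sparse, which is the property the abstract's backward-pass speedups exploit: gradient computations need only touch the time steps where spikes occurred.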
Version
Open Access
Date Issued
2023-02
Date Awarded
2023-06
Copyright Statement
Creative Commons Attribution NonCommercial NoDerivatives Licence
Advisor
Goodman, Daniel F.M.
Sponsor
Engineering and Physical Sciences Research Council
Grant Number
EP/L016796/1
Publisher Department
Electrical and Electronic Engineering
Publisher Institution
Imperial College London
Qualification Level
Doctoral
Qualification Name
Doctor of Philosophy (PhD)