A Brief Review on Spiking Neural Networks: A Biological Inspiration

Recent advances in deep learning have elevated the sophistication of applications across the field. Artificial neural networks are by now a mature technique in computer science; the principal ideas and models are more than fifty years old. In the modern computing era, however, scientists have introduced third-generation intelligent models. In a biological neuron, ion channels in the cell membrane control the flow of ions across it by opening and closing in response to voltage changes caused by intrinsic currents and externally supplied signals. The third-generation model, the Spiking Neural Network (SNN), narrows the distance between deep learning, machine learning, and neuroscience in a biologically inspired manner, connecting neuroscience and machine learning to enable efficient high-level computing. Spiking neural networks communicate using spikes, which are discrete events that occur at points in time, as opposed to continuous values. This paper is a review of the biologically inspired spiking neural network and its applications in different areas. The author aims to present a brief introduction to SNN, covering its mathematical structure, applications, and implementation. The paper also gives an overview of machine learning, deep learning, and reinforcement learning. This review can help artificial intelligence researchers gain a compact intuition of spiking neural networks.


Introduction
The human brain is an intricate structure consisting of roughly 90 billion neurons, internally connected by trillions of synapses. The brain passes information between neurons via electrical impulses called spikes [1]. Neurons in the brain are qualitatively different from units in an artificial neural network: they are dynamic systems with recurrent behavior rather than static non-linearities, and they compute and communicate with sparse spiking signals over time rather than with continuous values. The spiking neural network is inspired by such dynamic spiking neurons. SNNs have great potential for solving complicated time-dependent pattern recognition problems through these dynamics [22], and they can process data encoded in the timing of events [6]. SNN is a third-generation model patterned after the structure of the human brain. The neural models that dominated the first two generations produce output signals between 0 and 1; these signals encode a specific firing rate within a certain time window [7,12], which is known as rate coding [15]. Instead of rate coding, spiking neurons use pulse-based coding: each neuron receives and emits individual pulses, allowing information to be multiplexed in the frequency and amplitude of the signal [7]. Because spikes are binary, SNNs rely more heavily on the configuration of their parameters [8]. Moreover, in the low firing-rate regime in which the brain operates (1-10 Hz), spike times can be used as an extra dimension for computation, although this computational potential has long been under-explored because of the absence of generalized learning techniques for SNNs [9]. SNNs can also overcome the drawbacks of heuristic methods for data clustering [12].
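To make the contrast between rate coding and pulse-timing coding concrete, the following minimal Python sketch (not from the paper; the functions and parameter values are illustrative) encodes a scalar stimulus both as a Bernoulli rate-coded spike train and as a single time-to-first-spike pulse:

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(value, duration=100, max_rate=0.5):
    # Rate coding: a Bernoulli spike train whose firing probability
    # per time step is proportional to the input value (0..1).
    return (rng.random(duration) < value * max_rate).astype(int)

def latency_encode(value, duration=100):
    # Temporal coding: a single spike whose timing carries the
    # information -- stronger inputs fire earlier.
    train = np.zeros(duration, dtype=int)
    train[int((1.0 - value) * (duration - 1))] = 1
    return train

strong, weak = latency_encode(0.9), latency_encode(0.2)
```

Note how the latency code conveys the stimulus with a single spike, whereas the rate code needs a whole window of spikes — the multiplexing advantage the text attributes to pulse-based coding.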
SNNs are intriguing because of their capacity to learn in a distributed manner using a mechanism called Spike-Timing-Dependent Plasticity (STDP) [11]. SNN structures can be mapped directly onto certain kinds of spike-based neuromorphic hardware with little loss of performance [18]. This review paper aims to introduce elementary knowledge of spiking neural networks. The author mainly focuses on mechanisms of spike-based data processing, transformation, and learning. The paper also surveys the different synaptic plasticity rules used in SNNs and discusses their properties in the context of traditional machine learning, that is: supervised, unsupervised, and reinforcement learning. The author also reviews recent applications of spiking neurons in different fields, ranging from neurobiology to several fields of engineering. The paper is rounded out by a brief introduction to spiking neural networks and their mathematical intuitions.
The primary aim of this work is to acquaint a wider academic community with spiking neural networks. The author believes the paper will be valuable for researchers working in the field of AI who are interested in biomimetic neural techniques for fast data processing and learning; this work offers them a survey of such mechanisms and examples of applications where they have been used. Additionally, neuroscientists with a biological background may find the paper helpful for understanding natural learning in terms of AI theory. Finally, this paper will serve as an introduction to the theory and practice of spiking neural networks for all scientists interested in understanding the principles of spike-based neural processing. (Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 7 April 2021 | doi:10.20944/preprints202104.0202.v1)

A Study of Machine, Deep and Reinforcement Learning
Machine learning is a sub-field of artificial intelligence: a set of techniques for improving the learning performance of computer algorithms without extensive supervision [24]. The initial purpose of machine learning was to connect humans and computers in a feasible way, and machine learning algorithms are generally designed for broad classes of applications [23]. Support vector machines, random forests, logistic regression, and decision trees are widely used algorithms in real-time applications. Information retrieval, medical diagnosis, natural language processing, and classification and prediction tasks are key application areas of machine learning [37]. In Figure 1, domain-knowledge-based feature selection for machine learning is illustrated. The principal classes of machine learning are as follows.
Supervised Learning: In supervised learning, the training dataset pairs each input with a labeled output. While training, a machine learning algorithm searches for patterns that relate inputs to their labels. After training, new inputs are matched against what was learned from the training dataset to determine the desired output corresponding to each input. The main purpose of supervised learning is to predict the correct output for previously unseen inputs. Labeled spam detection is a classic example of supervised learning [5].
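As a concrete sketch of supervised learning on labeled data, the following toy Python perceptron (not from the paper; the "spam" features and data are hypothetical) learns a decision rule from labeled examples and then classifies new inputs:

```python
import numpy as np

# Toy labeled data. Hypothetical features: [num_links, num_caps_words];
# label 1 = spam, 0 = not spam.
X = np.array([[5, 8], [4, 6], [0, 1], [1, 0]], dtype=float)
y = np.array([1, 1, 0, 0])

w, b = np.zeros(2), 0.0
for _ in range(20):                       # perceptron training epochs
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += (yi - pred) * xi             # update weights only on mistakes
        b += (yi - pred)

def predict(x):
    # Apply the learned linear rule to a new, unseen input.
    return 1 if np.asarray(x, dtype=float) @ w + b > 0 else 0
```

After training, `predict` maps new feature vectors to labels, illustrating the "determine the desired output for new inputs" step described above.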
Unsupervised Learning: In unsupervised learning, the training dataset contains unlabeled data; the inputs have no associated desired outputs. The most frequent method of unsupervised learning is clustering, which uncovers structure in the data. Clustering can be done in many ways, such as hierarchical clustering or probabilistic clustering. Figure 2 illustrates clustering as an example of unsupervised learning.
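A minimal clustering sketch in Python (illustrative only; a two-cluster k-means on synthetic 2-D points, not a method from the paper) shows how structure is found without any labels:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two hypothetical blobs of 2-D points, no labels attached.
data = np.vstack([rng.normal(0.0, 0.5, (20, 2)),
                  rng.normal(5.0, 0.5, (20, 2))])

centers = data[[0, -1]].copy()            # initialize centers from the data
for _ in range(10):
    # Assign each point to its nearest center ...
    d = np.linalg.norm(data[:, None] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    # ... then move each center to the mean of its cluster.
    centers = np.array([data[labels == k].mean(axis=0) for k in range(2)])
```

The alternating assign/update loop is the essence of centroid-based clustering: the pattern (two groups) emerges purely from the geometry of the data.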
Reinforcement Learning: In reinforcement learning, an agent interacts with its environment through trial and error; in this way, it figures out how to choose the ideal action for accomplishing its goal [13,23]. It differs from supervised learning in that reward and penalty signals, rather than labels, are used to train the agent by evaluating its performance [23]. Perception-based interaction with the environment is needed to apply reinforcement learning. A learning classifier system is governed by reinforcement learning and splits into a few components: condition-action rules, a performance component, a reinforcement component, and a discovery component [19]. Figure 3 shows an illustration of the reinforcement learning structure [14].
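One common concrete instance of the trial-and-error scheme described above is tabular Q-learning; the paper does not prescribe a specific algorithm, so the following Python sketch on a hypothetical 5-state corridor (goal at the right end, reward only on reaching it) is purely illustrative:

```python
import numpy as np

n_states, n_actions = 5, 2                # states 0..4; actions: 0=left, 1=right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9
rng = np.random.default_rng(0)

for _ in range(300):                      # episodes of trial and error
    s, steps = 0, 0
    while s != 4 and steps < 200:
        a = int(rng.integers(2))          # explore with random actions
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == 4 else 0.0       # reward only at the goal
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s, steps = s2, steps + 1
```

After training, the greedy policy `Q[s].argmax()` chooses "right" in every state: the reward signal, propagated backward through the updates, plays the role of the evaluative feedback described above.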
Traditional machine learning algorithms are limited in their ability to process natural data in raw form. For years, developing a pattern recognition or AI framework required careful engineering and extensive domain knowledge to construct a feature extractor that converts raw data, for example, the pixel values of an image, into a feature vector from which patterns in the input can be identified. Deep learning was invented to overcome this limitation [32]. Deep learning is a class of machine learning that uses several layers to perform data processing for pattern recognition [23]. A deep neural network is a complex neural network whose neurons apply real-valued activation functions [3]. The basic architecture of a neural network consists of an input i, a bias b, weights w, and an activation function φ. Figure 4 shows a basic architectural illustration of an ANN. Over recent years, significant advances have taken place in convolutional neural networks [16], and deep learning now has extensive applications in image-based pattern recognition. Computer vision, object detection, video surveillance, and medical diagnosis are primary applications of deep learning [37].
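The basic unit just described (input i, weights w, bias b, activation φ) can be sketched in a few lines of Python; this is a generic textbook neuron, with tanh chosen here only as an example activation:

```python
import numpy as np

def neuron(i, w, b, phi=np.tanh):
    # One artificial neuron: weighted sum of the inputs plus a bias,
    # passed through an activation function phi.
    return phi(np.dot(w, i) + b)

# Example: two inputs whose weighted contributions cancel exactly.
out = neuron(i=np.array([1.0, 2.0]), w=np.array([0.5, -0.25]), b=0.0)
```

Stacking many such units into layers, and layers into a network, yields the deep architectures discussed above.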

Biological Inspiration Behind SNN
Biological neurons communicate by generating and propagating electrical pulses called action potentials, or spikes [17]. SNNs are much more closely related to their biological counterparts than previous ANNs such as multi-layer perceptrons [2]. SNN is inspired by the synaptic interconnection of neurons and the timing of spike firing, similar to the topology of the human brain. Electrical pulses are the principal means of transmitting information: a spike is an action potential generated from an electrical pulse, which travels along the axon of the neuron. At the arrival of each spike, neurotransmitters are released by synapses into the synaptic cleft. The transient effect a spike has on the neuron's membrane potential is generally referred to as the postsynaptic potential, or PSP. A PSP can either inhibit future firing, called an inhibitory postsynaptic potential (IPSP), or excite the neuron, making it more likely to fire, an excitatory postsynaptic potential (EPSP). Figure 5 illustrates how a neuron processes a spike [34].

Spiking Neural Network (SNN) Model
The biologically inspired spiking neural network has spiking neurons that communicate through synapses adaptable via scalar weights [4]. SNN models can also be implemented in hardware [29]. The spiking neuron model considered here is based on the Spike Response Model (SRM). A neuron's input synapses deliver input spikes at times {t_1, ..., t_n}, as illustrated in Figure 6. Neuron j emits a spike when its internal membrane potential x_j(t) crosses the threshold potential ϑ from below at the firing time t_j [31].
The threshold potential ϑ is considered constant. After firing, a spiking neuron does not respond to any input spikes for a limited time frame, known as the refractory period. The relationship between the postsynaptic potential and the input spikes can be written as

x(t) = Σ_i W_i α(t − t_i),

where i indexes the synapses, W_i is the i-th synaptic weight, which scales the amplitude of the neuron potential x(t), t_i is the arrival time of the i-th input spike, and α(t) (Figure 7) is the spike response function, a common choice being

α(t) = (t/τ) e^(1 − t/τ) for t > 0, and α(t) = 0 otherwise,

where τ is the membrane potential decay time constant.
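The SRM potential above can be simulated directly; this Python sketch (illustrative parameter values, with the standard α kernel that peaks at t = τ) sums weighted spike response kernels and checks where the potential crosses the threshold:

```python
import numpy as np

def alpha(s, tau=4.0):
    # Spike response function: zero for s <= 0, peaks at 1 when s = tau.
    return np.where(s > 0, (s / tau) * np.exp(1 - s / tau), 0.0)

def srm_potential(t, spike_times, weights, tau=4.0):
    # Membrane potential x(t) = sum_i W_i * alpha(t - t_i).
    return sum(w * alpha(t - ti, tau) for w, ti in zip(weights, spike_times))

ts = np.arange(0.0, 30.0, 1.0)
# Two input spikes at t = 2 and t = 3, each with weight 0.6.
x = np.array([srm_potential(t, [2.0, 3.0], [0.6, 0.6]) for t in ts])
theta = 1.0
fired = ts[x >= theta]                    # times at which x(t) >= threshold
```

Neither input alone can drive the potential over ϑ = 1, but their overlapping kernels do — the temporal summation at the heart of the SRM.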

Integrate-and-Fire Neurons
This is the most extensively used model of a spiking neuron and, by extension, of spiking neural networks. The model is grounded in electronics principles. A spike travels down the axon and is transformed by a low-pass filter, which converts the short pulse into a current pulse I(t − t_j^(f)) that charges the integrate-and-fire circuit. A postsynaptic potential ε(t − t_i^(f)) is raised as the resulting voltage increases, and the neuron emits a pulse when the voltage exceeds the threshold value [7,33].
The evolution of the membrane potential u over time can be described by the leaky integrator equation

τ_m du/dt = −u(t) + R I(t),

with τ_m being the membrane time constant through which the voltage 'leaks' away. As with the spike response model, the neuron fires once u crosses the threshold, and a short pulse δ is emitted [31].
The input current I for neuron i will usually be 0, since incoming pulses have a very short duration. When a spike arrives, it is multiplied by a synaptic efficacy factor c_ij, shaping the postsynaptic potential that charges the capacitor. This model is computationally straightforward and can readily be mapped to multidimensional hardware [7]. Figure 9 is an illustration of the integrate-and-fire neuron.
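A forward-Euler discretization of the leaky integrate-and-fire equation above takes only a few lines of Python (the constants and the constant-current input are illustrative, not from the paper):

```python
import numpy as np

def lif_simulate(I, dt=1.0, tau_m=10.0, R=1.0, theta=1.0, u_reset=0.0):
    # Leaky integrate-and-fire: tau_m du/dt = -u(t) + R*I(t).
    # The neuron fires and resets whenever u crosses the threshold theta.
    u, spikes, trace = 0.0, [], []
    for step, i_t in enumerate(I):
        u += dt / tau_m * (-u + R * i_t)  # Euler step of the leak equation
        if u >= theta:
            spikes.append(step)           # record the firing time ...
            u = u_reset                   # ... and reset the potential
        trace.append(u)
    return np.array(trace), spikes

I = np.full(100, 1.5)                     # constant suprathreshold current
trace, spikes = lif_simulate(I)
```

Under a constant suprathreshold current the neuron charges, fires, resets, and repeats, producing the regular spike train characteristic of this model.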

Learning Rules in SNN
Spiking or non-spiking, learning is accomplished by changing scalar-valued synaptic weights [4]. Spiking enables a kind of biologically plausible learning rule that cannot be replicated in non-spiking networks. Neuroscientists have identified numerous variations of this learning rule, which fall under the umbrella term spike-timing-dependent plasticity (STDP). STDP mirrors biology, in which a synapse is strengthened when a presynaptic spike occurs shortly before a postsynaptic spike; this is called Long-Term Potentiation (LTP) [12]. The main learning paradigms for SNNs are unsupervised and supervised. All supervised learning uses labels; most commonly, supervised learning adjusts weights through gradient descent on a cost function contrasting the network's output with the desired output.
The pairwise STDP rule for a single spiking neuron, as fitted to large sets of experimental data [4], can be written as

Δw = A+ e^(−Δt/τ+) if Δt ≥ 0, and Δw = −A− e^(Δt/τ−) if Δt < 0,

where Δt = t_post − t_pre is the time difference between the postsynaptic and presynaptic spikes, and A+, A−, τ+, τ− are constants fitted to the data.
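The STDP window above is easy to express in Python; the amplitudes and time constants below are illustrative placeholders, not values from the cited experiments:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.05, a_minus=0.05, tau=20.0):
    # Pairwise STDP window: potentiate (LTP) when the presynaptic spike
    # precedes the postsynaptic one, depress (LTD) otherwise.
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)   # LTP branch
    return -a_minus * math.exp(dt / tau)      # LTD branch

dw_ltp = stdp_dw(t_pre=10.0, t_post=15.0)     # pre before post -> positive
dw_ltd = stdp_dw(t_pre=15.0, t_post=10.0)     # post before pre -> negative
```

The sign of the update depends only on the relative spike timing, and its magnitude decays exponentially with the interval — exactly the locality that makes STDP suitable for distributed, unsupervised learning.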

SNN Implementation
SNNs require parallel synaptic signal transmission and weight updates, and implementing them on CPUs, which are inherently sequential, limits their speed [30]. SNNs can instead be implemented in VLSI systems [7]. Simulating SNNs on von Neumann machines is usually not efficient, since asynchronous network activity prompts semi-random access to synaptic weights in time and space. Ideally, each spiking neuron is its own processor in a network without a central clock, which is the design principle of neuromorphic platforms. The energy efficiency of neuromorphic systems makes them an ideal candidate for embedded devices subject to power constraints, e.g., cell phones, mobile and flying robots, and Internet of Things (IoT) devices. Moreover, neuromorphic devices could be used in data centers to reduce the cost of cloud applications that depend on neural nets [35].

SNN Applications
SNNs have extensive applications in various fields of computing, among them computer vision, medical diagnosis, cognitive science, and neural computation. A 3D SNN architecture, NeuCube, is reviewed for EEG signal classification [26]. An STDP-dependent object detection system is described in [27]. A brain-inspired BCI system using a spiking neural network has been developed in [28]. SNN models have also been embedded in hybrid systems that are more efficient than traditional deep learning models at image processing, classification of materials using impact sound, and classification of complex spike-train patterns [31].

Conclusion
Spiking neural networks are inspired by the structure of the human brain. In contrast to supervised learning in second-generation ANNs, whose network outputs are normally determined by activation functions such as softmax, learning to produce particular spikes at precise moments in time is a harder problem. This review paper has presented a brief insight into the state of the art in spiking neural networks: their biological motivation, the models that underlie the complex nets, some theoretical results on computational complexity and learnability, learning rules both conventional and novel, and some current application areas and results. The paper has also presented an overview of machine, deep, and reinforcement learning.

[Figure 8: On the left, the low-pass filter that transforms a spike into a current pulse I(t) charging the capacitor; on the right, a schematic of the soma, which generates a spike when the voltage u over the capacitor crosses the threshold [7].]