Submitted: 04 July 2025
Posted: 07 July 2025
Abstract
Keywords:
1. Introduction
2. Background and Motivation
3. Fundamental Concepts of Spiking Neural Networks
3.1. Biological Inspiration and Neuron Models
- Leaky Integrate-and-Fire (LIF) Model: A simplified yet effective neuron model in which the membrane potential integrates incoming spikes with leakage over time [29]. When the potential exceeds a threshold, the neuron fires a spike and resets its potential. Its mathematical simplicity has made it a staple in SNN research and hardware implementations (a minimal simulation sketch follows this list).
- Hodgkin-Huxley Model: A biophysically detailed model that describes the ion channel dynamics responsible for action potential generation [30]. Although highly accurate, the model is computationally expensive, which limits its use primarily to neuroscientific simulations rather than large-scale SNNs.
- Izhikevich Model: Combines biological plausibility and computational efficiency by approximating various spiking and bursting patterns observed in real neurons [31]. It is widely used for capturing diverse neuronal dynamics with reduced computational cost.
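To make the LIF description above concrete, the following is a minimal simulation sketch in plain NumPy. It implements a standard discretized LIF update with a hard reset; all parameter values (membrane time constant tau_m, threshold v_th, and so on) are illustrative choices rather than values taken from any cited work.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau_m=20.0, v_rest=0.0,
                 v_reset=0.0, v_th=1.0, r_m=1.0):
    """Simulate a single leaky integrate-and-fire neuron.

    input_current: 1-D array of injected current, one value per time step.
    Returns the membrane-potential trace and the spike train (0/1 per step).
    """
    v = v_rest
    v_trace, spikes = [], []
    for i_t in input_current:
        # Leaky integration: decay toward v_rest, driven by the input current.
        v += (dt / tau_m) * (-(v - v_rest) + r_m * i_t)
        if v >= v_th:            # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset          # hard reset after firing
        else:
            spikes.append(0)
        v_trace.append(v)
    return np.array(v_trace), np.array(spikes)

# A constant supra-threshold input charges the neuron toward threshold,
# producing a regular fire-and-reset pattern.
v_trace, spikes = simulate_lif(np.full(100, 1.5))
print("spike count:", spikes.sum())
```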
3.2. Spike Encoding and Temporal Coding
- Rate Coding: Represents information by the firing rate (spikes per unit time) of neurons, analogous to the average activity level in traditional ANNs [33] (see the encoding sketch after this list).
- Temporal Coding: Uses precise spike timing to convey information, such as the latency of the first spike or relative timing between spikes, enabling more compact and efficient representations.
- Population Coding: Distributes information across groups of neurons where the collective spiking pattern encodes stimulus attributes, enhancing robustness and expressiveness [34].
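The sketch below illustrates the first two schemes on a single normalized intensity value: rate coding as a Bernoulli (Poisson-like) spike train whose expected spike count scales with the input, and temporal coding as a single time-to-first-spike event in which stronger inputs fire earlier. Function names and the max_rate parameter are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(intensity, n_steps=100, max_rate=0.5):
    """Rate coding: each time step emits a spike with probability
    proportional to the normalized input intensity."""
    p = np.clip(intensity, 0.0, 1.0) * max_rate
    return (rng.random(n_steps) < p).astype(np.uint8)

def latency_encode(intensity, n_steps=100):
    """Temporal (time-to-first-spike) coding: stronger inputs spike earlier.
    Returns a spike train containing exactly one spike."""
    t = int(round((1.0 - np.clip(intensity, 0.0, 1.0)) * (n_steps - 1)))
    train = np.zeros(n_steps, dtype=np.uint8)
    train[t] = 1
    return train

print(rate_encode(0.8).sum())        # roughly 40 spikes over 100 steps
print(latency_encode(0.8).argmax())  # an early first spike for a strong input
```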
3.3. Synaptic Dynamics and Plasticity
- Spike-Timing Dependent Plasticity (STDP): A biologically inspired rule where synaptic strength is adjusted depending on the temporal order and interval between pre- and postsynaptic spikes, reinforcing causally linked spikes (a minimal pairwise-update sketch follows this list).
- Hebbian Learning: Synapses are strengthened when pre- and postsynaptic neurons fire together, embodying the principle that “neurons that fire together wire together.”
- Homeostatic Plasticity: Mechanisms that stabilize neural activity by adjusting synaptic strengths to maintain balanced firing rates.
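A minimal pairwise STDP update, in line with the description above, is sketched below: a presynaptic spike that precedes a postsynaptic spike potentiates the synapse, the reverse order depresses it, and the magnitude decays exponentially with the spike-time difference. The learning rates and time constants are hypothetical illustrative values.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pairwise STDP: potentiate when the presynaptic spike precedes the
    postsynaptic spike (causal order), depress otherwise."""
    dt = t_post - t_pre
    if dt > 0:                                # pre before post -> potentiation
        dw = a_plus * np.exp(-dt / tau_plus)
    else:                                     # post before (or with) pre -> depression
        dw = -a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w + dw, w_min, w_max))

print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # > 0.5, potentiation
print(stdp_update(0.5, t_pre=15.0, t_post=10.0))  # < 0.5, depression
```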
3.4. Network Architectures
- Feedforward SNNs: Basic layered networks where spikes propagate forward, suitable for pattern recognition tasks (see the sketch after this list).
- Recurrent SNNs: Incorporate feedback loops enabling temporal memory and dynamic behavior, crucial for sequence processing and temporal pattern recognition [40].
- Convolutional SNNs: Adapt convolutional operations to spike-based signals, enabling spatial feature extraction similar to CNNs in ANNs.
- Reservoir Computing and Liquid State Machines: Utilize randomly connected recurrent networks with readout layers trained separately, leveraging the dynamic states of the reservoir.
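As a structural illustration of the feedforward case, the sketch below stacks two layers of LIF neurons and unrolls them over discrete time steps. The weights are random and untrained and the input is a synthetic Poisson-like spike stream, so this shows only how spikes propagate through a layered SNN, not a working classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

class LIFLayer:
    """A layer of LIF neurons driven by weighted input spikes."""
    def __init__(self, n_in, n_out, tau=20.0, v_th=1.0):
        self.w = rng.normal(0.0, 0.5, size=(n_in, n_out))
        self.tau, self.v_th = tau, v_th
        self.v = np.zeros(n_out)

    def step(self, in_spikes):
        # Leak, integrate the weighted input spikes, then fire and reset.
        self.v += (-self.v / self.tau) + in_spikes @ self.w
        out = (self.v >= self.v_th).astype(np.float64)
        self.v[out == 1] = 0.0
        return out

# Two-layer feedforward SNN unrolled over 50 time steps.
layers = [LIFLayer(100, 64), LIFLayer(64, 10)]
counts = np.zeros(10)
for _ in range(50):
    s = (rng.random(100) < 0.1).astype(np.float64)  # synthetic input spikes
    for layer in layers:
        s = layer.step(s)
    counts += s
print("output spike counts:", counts)
```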
3.5. Summary
4. Training Algorithms for Spiking Neural Networks
4.1. Challenges in Training SNNs
4.2. Spike-Timing Dependent Plasticity (STDP)
4.3. Surrogate Gradient Methods
4.4. Conversion from Pretrained ANNs
4.5. Other Learning Paradigms
4.6. Summary and Outlook
5. Neuromorphic Hardware and Computational Efficiency
5.1. Principles of Neuromorphic Computing
- Event-driven Processing: Computation and communication occur only on spike events, reducing idle cycles and unnecessary power usage (illustrated in the sketch after this list).
- Local Memory and Computation: Mimicking biological neurons, memory and processing units are colocated, alleviating the von Neumann bottleneck common in traditional architectures [64].
- Massive Parallelism: Large numbers of neurons and synapses operate simultaneously, enabling real-time processing of complex sensory data [65].
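The benefit of event-driven processing can be illustrated with a small sketch: when only a few presynaptic neurons spike in a time step, an event-driven update touches only the weight rows of those neurons, while a dense clock-driven update touches every weight and yields the same result. The sizes and sparsity level below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pre, n_post = 1000, 100
weights = rng.normal(size=(n_pre, n_post))

# Sparse activity: roughly 1% of presynaptic neurons spike in this time step.
spike_ids = np.flatnonzero(rng.random(n_pre) < 0.01)

# Dense (clock-driven) update: touches every weight regardless of activity.
dense_input = np.zeros(n_pre)
dense_input[spike_ids] = 1.0
dense_out = dense_input @ weights            # n_pre * n_post multiply-accumulates

# Event-driven update: only the rows of spiking neurons are accumulated.
event_out = weights[spike_ids].sum(axis=0)   # len(spike_ids) * n_post accumulates

assert np.allclose(dense_out, event_out)
print(f"work ratio: {len(spike_ids) * n_post} / {n_pre * n_post} operations")
```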
5.2. Notable Neuromorphic Platforms
- IBM TrueNorth: Featuring over one million spiking neurons and 256 million synapses, TrueNorth uses a manycore architecture with event-driven communication, achieving orders-of-magnitude reductions in power consumption compared with GPUs.
- Intel Loihi: A programmable neuromorphic chip supporting on-chip learning through spike-based plasticity rules, Loihi facilitates real-time adaptive systems and has been applied in robotics and sensory processing.
- SpiNNaker: Designed as a massively parallel digital computer mimicking neural architectures, SpiNNaker supports large-scale SNN simulations with high flexibility but uses traditional processors [66].
- BrainScaleS: An analog neuromorphic platform that exploits the physical properties of electronic circuits to emulate neural dynamics at accelerated timescales [67].
5.3. Energy Efficiency and Performance Comparisons
5.4. Challenges and Opportunities in Integrating with LLMs
- Hybrid Architectures: Combining spiking layers with conventional ANN components to leverage efficiency without sacrificing performance [71] (a rough sketch follows this list).
- Event-driven Transformers: Developing spike-based approximations of transformer architectures to exploit temporal sparsity.
- Algorithm-Hardware Co-design: Tailoring training algorithms and network architectures specifically for neuromorphic hardware capabilities [72].
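As a rough sketch of the hybrid idea, the example below uses an untrained spiking LIF front-end to convert a rate-encoded input into firing-rate features and then applies a conventional (non-spiking) softmax readout. All names, shapes, and parameters are hypothetical; the point is only how spiking and dense components can be composed, not a validated architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

def spiking_frontend(x, w, n_steps=20, tau=10.0, v_th=1.0):
    """Spiking front-end: rate-encode the input, run one LIF layer for
    n_steps, and return per-neuron firing rates as dense features."""
    v = np.zeros(w.shape[1])
    counts = np.zeros(w.shape[1])
    for _ in range(n_steps):
        spikes_in = (rng.random(x.shape) < x).astype(np.float64)  # rate coding
        v += (-v / tau) + spikes_in @ w
        fired = v >= v_th
        counts += fired
        v[fired] = 0.0
    return counts / n_steps

def ann_readout(features, w_out, b_out):
    """Conventional (non-spiking) softmax classifier on spike-rate features."""
    logits = features @ w_out + b_out
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = rng.random(64)                               # normalized input in [0, 1]
w_spk = rng.normal(0.0, 0.3, (64, 32))
w_out, b_out = rng.normal(0.0, 0.3, (32, 10)), np.zeros(10)
probs = ann_readout(spiking_frontend(x, w_spk), w_out, b_out)
print(probs.round(3))
```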
5.5. Summary
6. Applications and Emerging Trends
6.1. Energy-Efficient and Real-Time Systems
- Neuromorphic Vision and Auditory Processing: SNNs excel at processing event-based sensory inputs such as those from Dynamic Vision Sensors (DVS) or silicon cochleas, enabling ultra-low-latency object detection, gesture recognition, and auditory scene analysis.
- Robotics and Control: Real-time adaptive control systems leverage SNNs for processing sensorimotor feedback with minimal energy, supporting navigation, manipulation, and interaction in dynamic environments [77].
6.2. Brain-Machine Interfaces and Neuroprosthetics
6.3. Hybrid Models and Cross-Paradigm Integration
- Spiking Transformers: Adaptations of transformer architectures employing spiking mechanisms to incorporate temporal coding and event-driven processing [81] (see the simplified sketch after this list).
- Neuromorphic Preprocessing: Using SNNs as front-end feature extractors or encoders for subsequent processing by LLMs or deep neural networks, enabling efficient sensory data compression [82].
- Joint Training Frameworks: Developing algorithms that enable end-to-end training of mixed spiking and non-spiking layers for complex tasks [83].
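To give a feel for how attention can be made spike-based, the sketch below implements a highly simplified, softmax-free self-attention block loosely inspired by Spikformer-style spiking self-attention: queries, keys, and values are binary spike maps, so their products are non-negative and can be thresholded back into spikes without a softmax. This is an illustrative approximation with random weights, not the published architecture.

```python
import numpy as np

rng = np.random.default_rng(4)

def heaviside(x, v_th=1.0):
    """Threshold membrane values into binary spikes."""
    return (x >= v_th).astype(np.float64)

def spiking_self_attention(spikes, w_q, w_k, w_v, scale=0.125):
    """Softmax-free attention on binary spike inputs: Q, K, V are spike
    tensors, so Q K^T V stays non-negative and the output is re-spiked."""
    q = heaviside(spikes @ w_q)      # (tokens, d) binary queries
    k = heaviside(spikes @ w_k)
    v = heaviside(spikes @ w_v)
    attn = (q @ k.T) * scale         # non-negative spike-based scores
    return heaviside(attn @ v)       # spike output of the attention block

tokens, d = 8, 16
spikes = (rng.random((tokens, d)) < 0.3).astype(np.float64)
w_q, w_k, w_v = (rng.normal(0.0, 0.5, (d, d)) for _ in range(3))
out = spiking_self_attention(spikes, w_q, w_k, w_v)
print(out.shape, float(out.sum()))
```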
6.4. Learning and Adaptation in Dynamic Environments
6.5. Theoretical and Neuroscientific Insights
6.6. Summary
7. Future Directions and Open Challenges
7.1. Scalable and Efficient Training Algorithms
7.2. Bridging the Gap with Large Language Models
7.3. Neuromorphic Hardware Co-Design
7.4. Standardization and Benchmarking
7.5. Interdisciplinary Collaboration
7.6. Ethical and Societal Implications
7.7. Summary
8. Conclusion
References
- Shrestha, S.B.; Orchard, G. SLAYER: Spike layer error reassignment in time. In Proceedings of the International Conference on Neural Information Processing Systems, 2018, pp. 1412–1421.
- Zhang, J.; Shen, J.; Wang, Z.; Guo, Q.; Yan, R.; Pan, G.; Tang, H. SpikingMiniLM: Energy-efficient Spiking Transformer for Natural Language Understanding. Science China Information Sciences 2024.
- Gerstner, W.; Kistler, W.M. Spiking neuron models: Single neurons, populations, plasticity; Cambridge University Press, 2002.
- Guo, Y.; Chen, Y.; Zhang, L.; Liu, X.; Wang, Y.; Huang, X.; Ma, Z. IM-loss: information maximization loss for spiking neural networks. Advances in Neural Information Processing Systems 2022, 35, 156–166.
- APT Advanced Processor Technologies Research Group. SpiNNaker.
- Sengupta, A.; Ye, Y.; Wang, R.; Liu, C.; Roy, K. Going deeper in spiking neural networks: VGG and residual architectures. Frontiers in Neuroscience 2019, 13, 95. [CrossRef]
- Zou, S.; Mu, Y.; Zuo, X.; Wang, S.; Cheng, L. Event-based human pose tracking by spiking spatiotemporal transformer. arXiv preprint arXiv:2303.09681 2023.
- Scellier, B.; Bengio, Y. Equilibrium propagation: Bridging the gap between energy-based models and backpropagation. Frontiers in Computational Neuroscience 2017, 11, 24. [CrossRef]
- Zhou, Z.; Zhu, Y.; He, C.; Wang, Y.; Yan, S.; Tian, Y.; Yuan, L. Spikformer: When spiking neural network meets transformer. In Proceedings of the International Conference on Learning Representations, 2023.
- Deng, S.; Li, Y.; Zhang, S.; Gu, S. Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting. In Proceedings of the International Conference on Learning Representations, 2022.
- de Vries, A. The growing energy footprint of artificial intelligence. Joule 2023, 7, 2191–2194.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 2012, 25, 1097–1105.
- Song, X.; Song, A.; Xiao, R.; Sun, Y. One-step Spiking Transformer with a Linear Complexity. In Proceedings of the International Joint Conference on Artificial Intelligence, 2024.
- Yao, M.; Hu, J.; Zhou, Z.; Yuan, L.; Tian, Y.; Xu, B.; Li, G. Spike-driven transformer. Advances in Neural Information Processing Systems 2024, 36, 64043–64058.
- Desai, K.; Johnson, J. VirTex: Learning visual representations from textual annotations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 11162–11173.
- Esser, S.K.; Merolla, P.A.; Arthur, J.V.; Cassidy, A.S.; Appuswamy, R.; Andreopoulos, A.; Berg, D.J.; McKinstry, J.L.; Melano, T.; Barch, D.R.; et al. Convolutional networks for fast, energy-efficient neuromorphic computing. Proceedings of the National Academy of Sciences of the United States of America 2016, 113, 11441–11446. [CrossRef]
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, 2015, pp. 448–456.
- Zhu, X.; Zhao, B.; Ma, D.; Tang, H. An efficient learning algorithm for direct training deep spiking neural networks. IEEE Transactions on Cognitive and Developmental Systems 2022.
- Vigneron, A.; Martinet, J. A critical survey of STDP in Spiking Neural Networks for Pattern Recognition. In Proceedings of the International Joint Conference on Neural Networks, 2020, pp. 1–9.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations, 2015.
- Hu, Y.; Zheng, Q.; Jiang, X.; Pan, G. Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023, 45, 14546–14562. [CrossRef]
- Park, E.; Ahn, J.; Yoo, S. Weighted-entropy-based quantization for deep neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2017, pp. 5456–5464.
- Ambrogio, S.; Narayanan, P.; Tsai, H.; Shelby, R.M.; Boybat, I.; di Nolfo, C.; Sidler, S.; Giordano, M.; Bodini, M.; Farinha, N.C.; et al. Equivalent-accuracy accelerated neural-network training using analogue memory. Nature 2018, 558, 60–67.
- Berner, R.; Brandli, C.; Yang, M.; Liu, S.C.; Delbruck, T. A 240×180 10 mW 12 μs latency sparse-output vision sensor for mobile applications. In Proceedings of the Symposium on VLSI Circuits. IEEE, 2013, pp. C186–C187.
- Nair, V.; Hinton, G.E. Rectified linear units improve restricted boltzmann machines. In Proceedings of the International Conference on Machine Learning, 2010.
- Krizhevsky, A.; Hinton, G. Learning multiple layers of features from tiny images. Master’s thesis, Department of Computer Science, University of Toronto 2009.
- Bengio, Y.; Simard, P.; Frasconi, P. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks 1994, 5, 157–166.
- Yang, S.; Ma, H.; Yu, C.; Wang, A.; Li, E.P. SDiT: Spiking Diffusion Model with Transformer. arXiv preprint arXiv:2402.11588 2024.
- Lichtsteiner, P.; Posch, C.; Delbruck, T. A 128 × 128 120 dB 30 mW asynchronous vision sensor that responds to relative intensity change. In Proceedings of the IEEE International Solid-State Circuits Conference. IEEE, 2006, pp. 2060–2069.
- Wang, H.; Liang, X.; Li, M.; Zhang, T. RTFormer: Re-parameter TSBN Spiking Transformer. arXiv preprint arXiv:2406.14180 2024.
- Eshraghian, J.K.; Ward, M.; Neftci, E.; Wang, X.; Lenz, G.; Dwivedi, G.; Bennamoun, M.; Jeong, D.S.; Lu, W.D. Training spiking neural networks using lessons from deep learning. arXiv preprint arXiv:2109.12894 2021.
- Huang, Z.; Shi, X.; Hao, Z.; Bu, T.; Ding, J.; Yu, Z.; Huang, T. Towards High-performance Spiking Transformers from ANN to SNN Conversion. In Proceedings of the ACM Multimedia, 2024.
- IBM Corporation. SyNAPSE.
- Shen, S.; Zhao, D.; Shen, G.; Zeng, Y. TIM: An Efficient Temporal Interaction Module for Spiking Transformer. arXiv preprint arXiv:2401.11687 2024.
- Yan, J.; Liu, Q.; Zhang, M.; Feng, L.; Ma, D.; Li, H.; Pan, G. Efficient spiking neural network design via neural architecture search. Neural Networks 2024, p. 106172. [CrossRef]
- Albanie, S. MCN-Models.
- Maass, W. Networks of spiking neurons: the third generation of neural network models. Neural Networks 1997, 10, 1659–1671. [CrossRef]
- Lichtsteiner, P.; Posch, C.; Delbruck, T. A 128 × 128 120 dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor. IEEE Journal of Solid-State Circuits 2008, 43, 566–576. [CrossRef]
- Borst, A.; Theunissen, F.E. Information theory and neural coding. Nature Neuroscience 1999, 2, 947–957. [CrossRef]
- Qiao, N.; Mostafa, H.; Stefanini, F.; Sumislawska, D. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses. Frontiers in Neuroscience 2015, 9, 141. [CrossRef]
- Intel Corporation. Intel Xeon Platinum 9282 Processor.
- Yao, M.; Hu, J.; Hu, T.; Xu, Y.; Zhou, Z.; Tian, Y.; Xu, B.; Li, G. Spike-driven Transformer V2: Meta Spiking Neural Network Architecture Inspiring the Design of Next-generation Neuromorphic Chips. In Proceedings of the International Conference on Learning Representation, 2024.
- Guo, Y.; Zhang, Y.; Chen, Y.; Peng, W.; Liu, X.; Zhang, L.; Huang, X.; Ma, Z. Membrane potential batch normalization for spiking neural networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 19420–19430.
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision 2015, 115, 211–252. [CrossRef]
- Furber, S.B.; Galluppi, F.; Temple, S.; Plana, L.A. The SpiNNaker project. Proceedings of the IEEE 2014, 102, 652–665.
- Zhou, C.; Zhang, H.; Zhou, Z.; Yu, L.; Ma, Z.; Zhou, H.; Fan, X.; Tian, Y. Enhancing the Performance of Transformer-based Spiking Neural Networks by Improved Downsampling with Precise Gradient Backpropagation. arXiv preprint arXiv:2305.05954 2023.
- Nvidia. Nvidia V100 Tensor Core GPU.
- Xu, B.; Geng, H.; Yin, Y.; Li, P. DISTA: Denoising Spiking Transformer with intrinsic plasticity and spatiotemporal attention. arXiv preprint arXiv:2311.09376 2023.
- Eshraghian, J.K.; Ward, M.; Neftci, E.O.; Wang, X.; Lenz, G.; Dwivedi, G.; Bennamoun, M.; Jeong, D.S.; Lu, W.D. Training spiking neural networks using lessons from deep learning. Proceedings of the IEEE 2023, 111.
- Masquelier, T.; Thorpe, S.J. Unsupervised learning of visual features through spike timing dependent plasticity. PLoS Computational Biology 2007, 3, e31. [CrossRef]
- Arafa, Y.; ElWazir, A.; ElKanishy, A.; Aly, Y.; Elsayed, A.; Badawy, A.H.; Chennupati, G.; Eidenbenz, S.; Santhi, N. Verified instruction-level energy consumption measurement for NVIDIA GPUs. In Proceedings of the ACM International Conference on Computing Frontiers, 2020, pp. 60–70.
- Bu, T.; Fang, W.; Ding, J.; Dai, P.; Yu, Z.; Huang, T. Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks. In Proceedings of the International Conference on Learning Representations, 2022.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE 1998, 86, 2278–2324.
- Cai, Z.; He, X.; Sun, J.; Vasconcelos, N. Deep learning with low precision by half-wave gaussian quantization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2017, pp. 5918–5926.
- Chowdhury, S.S.; Rathi, N.; Roy, K. One timestep is all you need: Training spiking neural networks with ultra low latency. arXiv preprint arXiv:2110.05929 2021.
- Thorpe, S.; Fize, D.; Marlot, C. Speed of processing in the human visual system. Nature 1996, 381, 520–522.
- Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language models are few-shot learners. Advances in Neural Information Processing Systems 2020, 33, 1877–1901.
- Schneider, M.L.; Donnelly, C.A.; Russek, S.E.; Baek, B.; Pufall, M.R.; Hopkins, P.F.; Dresselhaus, P.D.; Benz, S.P.; Rippard, W.H. Ultralow power artificial synapses using nanotextured magnetic Josephson junctions. Science Advances 2018, 4, e1701329. [CrossRef]
- Horowitz, M. Energy table for 45nm process. Stanford VLSI wiki 2014.
- Hunsberger, E.; Eliasmith, C. Spiking deep networks with LIF neurons. arXiv preprint arXiv:1510.08829 2015.
- Kim, H.; Park, J.; Lee, C.; Kim, J.J. Improving accuracy of binary neural networks using unbalanced activation distribution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 7862–7871.
- Schaefer, A.T.; Margrie, T.W. Spatiotemporal representations in the olfactory system. Trends in Neurosciences 2007, 30, 92–100. [CrossRef]
- AMD. AMD Radeon VII GPU.
- iniLabs. DVS 128.
- Deng, S.; Gu, S. Optimal conversion of conventional artificial neural networks to spiking neural networks. In Proceedings of the International Conference on Learning Representations, 2021.
- O’Connor, P.; Neil, D.; Liu, S.C.; Delbruck, T.; Pfeiffer, M. Real-time classification and sensor fusion with a spiking deep belief network. Frontiers in Neuroscience 2013, 7, 178.
- Indiveri, G.; Corradi, F.; Qiao, N. Neuromorphic architectures for spiking deep neural networks. In Proceedings of the IEEE International Electron Devices Meeting, 2015, pp. 4–2.
- Guo, Y.; Huang, X.; Ma, Z. Direct learning-based deep spiking neural networks: a review. Frontiers in Neuroscience 2023, 17, 1209795. [CrossRef]
- Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K.; Kuksa, P. Natural language processing (almost) from scratch. Journal of Machine Learning Research 2011, 12, 2493–2537.
- Van Rullen, R.; Thorpe, S.J. Rate coding versus temporal order coding: what the retinal ganglion cells tell the visual cortex. Neural Computation 2001, 13, 1255–1283. [CrossRef]
- Google. Cloud TPU.
- Meng, Q.; Xiao, M.; Yan, S.; Wang, Y.; Lin, Z.; Luo, Z.Q. Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 12444–12453.
- Xu, Z.; Lin, M.; Liu, J.; Chen, J.; Shao, L.; Gao, Y.; Tian, Y.; Ji, R. ReCU: Reviving the dead weights in binary neural networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 5198–5208.
- Lin, D.; Talathi, S.; Annapureddy, S. Fixed point quantization of deep convolutional networks. In Proceedings of the International Conference on Machine Learning, 2016, pp. 2849–2858.
- Li, Y.; Deng, S.; Dong, X.; Gong, R.; Gu, S. A free lunch from ANN: Towards efficient, accurate spiking neural networks calibration. In Proceedings of the International Conference on Machine Learning, 2021, pp. 6316–6325.
- Li, Y.; Dong, X.; Wang, W. Additive powers-of-two quantization: An efficient non-uniform discretization for neural networks. Proceedings of the International Conference on Learning Representations 2020.
- Li, T.; Liu, W.; Lv, C.; Xu, J.; Zhang, C.; Wu, M.; Zheng, X.; Huang, X. SpikeCLIP: A Contrastive Language-Image Pretrained Spiking Neural Network. arXiv preprint arXiv:2310.06488 2023.
- Ponulak, F.; Kasiński, A. Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting. Neural Computation 2010, 22, 467–510. [CrossRef]
- Fang, Y.; Wang, Z.; Zhang, L.; Cao, J.; Chen, H.; Xu, R. Spiking Wavelet Transformer. arXiv preprint arXiv:2403.11138 2024.
- Lian, S.; Shen, J.; Wang, Z.; Tang, H. IM-LIF: Improved Neuronal Dynamics With Attention Mechanism for Direct Training Deep Spiking Neural Network. IEEE Transactions on Emerging Topics in Computational Intelligence 2024.
- Hinton, G. The forward-forward algorithm: Some preliminary investigations. arXiv preprint arXiv:2212.13345 2022.
- Serrano-Gotarredona, T.; Linares-Barranco, B. A 128×128 1.5% Contrast Sensitivity 0.9% FPN 3 μs Latency 4 mW Asynchronous Frame-Free Dynamic Vision Sensor Using Transimpedance Preamplifiers. IEEE Journal of Solid-State Circuits 2013, 48, 827–838. [CrossRef]
- Hu, Y.; Tang, H.; Pan, G. Spiking Deep Residual Networks. IEEE Transactions on Neural Networks and Learning Systems 2021, 34, 5200–5205. [CrossRef]
- Bi, G.q.; Poo, M.m. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. Journal of Neuroscience 1998, 18, 10464–10472. [CrossRef]
- Bu, T.; Ding, J.; Yu, Z.; Huang, T. Optimized potential initialization for low-latency spiking neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, 2022, Vol. 36, pp. 11–20.
- Wang, Q.; Zhang, D.; Zhang, T.; Xu, B. Attention-free Spikformer: Mixing Spike Sequences with Simple Linear Transforms. arXiv preprint arXiv:2308.02557 2023.
- Zniyed, Y.; Nguyen, T.P.; et al. Enhanced network compression through tensor decompositions and pruning. IEEE Transactions on Neural Networks and Learning Systems 2024.
- Zhu, R.J.; Zhao, Q.; Zhang, T.; Deng, H.; Duan, Y.; Zhang, M.; Deng, L.J. TCJA-SNN: Temporal-channel joint attention for spiking neural networks. arXiv preprint arXiv:2206.10177 2022.
- Ding, R.; Chin, T.W.; Liu, Z.; Marculescu, D. Regularizing activation distribution for training binarized deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 11408–11417.
- Xing, X.; Gao, B.; Zhang, Z.; Clifton, D.A.; Xiao, S.; Du, L.; Li, G.; Zhang, J. SpikeLLM: Scaling up Spiking Neural Network to Large Language Models via Saliency-based Spiking. arXiv preprint arXiv:2407.04752 2024.
- Wikipedia contributors. Tensor processing unit — Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/w/index.php?title=Tensor_processing_unit&oldid=958810965, 2020. [Online; accessed 1-June-2020].
- Liu, Z.; Shen, Z.; Savvides, M.; Cheng, K.T. Reactnet: Towards precise binary neural network with generalized activation functions. In Proceedings of the European Conference on Computer Vision. Springer, 2020, pp. 143–159.
- Wang, X.; Wu, Z.; Rong, Y.; Zhu, L.; Jiang, B.; Tang, J.; Tian, Y. SSTFormer: bridging spiking neural network and memory support transformer for frame-event based recognition. arXiv preprint arXiv:2308.04369 2023.
- O’Connor, P.; Gavves, E.; Welling, M. Training a spiking neural network with equilibrium propagation. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2019, pp. 1516–1523.
- VanRullen, R.; Thorpe, S.J. Surfing a spike wave down the ventral stream. Vision Research 2002, 42, 2593–2615. [CrossRef]
- Liu, Z.; Wu, B.; Luo, W.; Yang, X.; Liu, W.; Cheng, K.T. Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm. In Proceedings of the European Conference on Computer Vision, 2018, pp. 722–737.
- Gütig, R.; Sompolinsky, H. The tempotron: a neuron that learns spike timing–based decisions. Nature Neuroscience 2006, 9, 420–428. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
