Preprint
Article

This version is not peer-reviewed.

Constructing Brain-Inspired Sparse Topologies for Energy-Efficient ANN-to-SNN Conversion via Cannistraci-Hebb Training

Submitted: 09 May 2026

Posted: 11 May 2026


Abstract
ANN-to-SNN conversion is an important approach for obtaining high-performance spiking neural networks (SNNs), yet current conversion methodologies predominantly focus on dense architectures, overlooking the structural sparsity that characterizes biological neural systems. In this work, we propose a sparse ANN-to-SNN conversion framework in which the SNN inherits the sparse topology learned during ANN training. We instantiate this framework with Cannistraci-Hebb Training (CHT), a brain-inspired dynamic sparse training method. Across CNN experiments on CIFAR-10, CIFAR-100, and DVSGesture, and ViT experiments on ImageNet-1K, SNNs produced by our framework achieve near-dense performance while outperforming, in accuracy, SNNs obtained from static-pruning and sparse-training baselines. Compared with dense SNN counterparts, our framework also reduces energy consumption in all evaluated settings. Beyond these accuracy-efficiency gains, the resulting sparse SNNs exhibit biologically plausible structural properties, including meta-depth and scale-free organization. These results suggest that learned sparse topology is a useful design dimension for ANN-to-SNN conversion, enabling high-accuracy sparse SNNs with substantially reduced energy consumption.
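To make the framework's core step concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation. It assumes a binary sparsity mask produced by some dynamic sparse training run (CHT in this paper), applies it to a trained ANN linear layer, copies the masked weights into an SNN layer, and calibrates an integrate-and-fire threshold from the maximum calibration pre-activation, a common baseline conversion recipe. The names IFNeuron and convert_sparse_ann_to_snn are illustrative.

# Hypothetical sketch of sparse ANN-to-SNN conversion; assumptions noted above.
import torch
import torch.nn as nn

class IFNeuron(nn.Module):
    """Integrate-and-fire neuron used after conversion (soft reset)."""
    def __init__(self, threshold: float):
        super().__init__()
        self.threshold = threshold
        self.v = None  # membrane potential, allocated lazily

    def forward(self, x):
        if self.v is None or self.v.shape != x.shape:
            self.v = torch.zeros_like(x)
        self.v = self.v + x                              # integrate input current
        spikes = (self.v >= self.threshold).float()      # fire where threshold is crossed
        self.v = self.v - spikes * self.threshold        # soft reset by subtraction
        return spikes

def convert_sparse_ann_to_snn(linear: nn.Linear, mask: torch.Tensor,
                              calib_inputs: torch.Tensor) -> nn.Sequential:
    """Copy masked ANN weights into an SNN layer; calibrate the IF threshold."""
    sparse_w = linear.weight.data * mask                 # inherit the learned sparse topology
    snn_linear = nn.Linear(linear.in_features, linear.out_features,
                           bias=linear.bias is not None)
    snn_linear.weight.data = sparse_w
    if linear.bias is not None:
        snn_linear.bias.data = linear.bias.data.clone()
    with torch.no_grad():
        pre_act = snn_linear(calib_inputs)
        # Max-activation threshold calibration; floor avoids a zero threshold.
        threshold = max(pre_act.relu().max().item(), 1e-3)
    return nn.Sequential(snn_linear, IFNeuron(threshold))

# Usage: a toy layer, a random ~90%-sparse mask, and random calibration data.
ann_layer = nn.Linear(128, 64)
mask = (torch.rand_like(ann_layer.weight) > 0.9).float()
snn_layer = convert_sparse_ann_to_snn(ann_layer, mask, torch.randn(32, 128))
out_spikes = snn_layer(torch.randn(32, 128))             # one simulation timestep

In this sketch the mask is random purely for demonstration; in the proposed framework it would be the topology evolved by CHT during ANN training, so the converted SNN inherits that structure directly.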
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.