Preprint
Article

This version is not peer-reviewed.

Constructing Brain-Inspired Sparse Topologies for Energy-Efficient ANN-to-SNN Conversion via Cannistraci-Hebb Training

  † Equal contribution.

Submitted: 13 February 2026

Posted: 14 February 2026


Abstract
While ANN-to-SNN conversion is a pivotal approach for obtaining spiking neural networks (SNNs), current methods focus almost exclusively on dense architectures, disregarding the structural sparsity that is fundamental to the brain's neural networks. To bridge this gap, we propose a novel framework that integrates Cannistraci-Hebb Training (CHT), a brain-inspired Dynamic Sparse Training algorithm, to instill biologically plausible topologies into SNNs. Through our framework, the converted SNNs directly inherit emergent brain-like properties, such as meta-depth and small-worldness, from the sparse artificial neural networks (ANNs). We first verify the brain-like topology produced by CHT and then evaluate our framework across different conversion approaches. Our approach achieves accuracy comparable or superior to dense counterparts on both convolutional neural networks (CNNs) and Vision Transformers (ViTs), while reducing theoretical energy consumption by over 60%. Empirically, we validate the framework's superiority over pruning baselines and direct sparse SNN training in terms of the accuracy-energy trade-off.
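The abstract describes Cannistraci-Hebb Training as the dynamic sparse training mechanism that shapes the sparse ANN topology before conversion to an SNN. As an illustration only, the following minimal PyTorch sketch shows one prune-and-regrow step in which the weakest active links are removed and replaced at the currently absent positions with the highest count of length-3 paths in the layer's bipartite connectivity graph, a simplified stand-in for the CH3-L3 link predictor used by CHT. The function name prune_and_regrow, the removal fraction, and the exact scoring rule are assumptions made for this sketch and do not reproduce the authors' implementation.

import torch

def prune_and_regrow(weight: torch.Tensor, mask: torch.Tensor,
                     removal_frac: float = 0.3) -> torch.Tensor:
    """One illustrative step of topology-driven dynamic sparse training.

    weight, mask: (out_features, in_features); mask is a {0, 1} float tensor.
    Returns an updated binary mask with the same number of active links.
    """
    active = mask.bool()
    n_remove = int(removal_frac * active.sum().item())
    if n_remove == 0:
        return mask.clone()

    # 1) Prune: drop the weakest active connections by weight magnitude.
    mags = weight.abs().masked_fill(~active, float("inf"))
    drop_idx = torch.topk(mags.flatten(), n_remove, largest=False).indices
    new_mask = mask.clone().flatten()
    new_mask[drop_idx] = 0.0
    new_mask = new_mask.view_as(mask)

    # 2) Regrow: score each currently absent link (i, j) by the number of
    #    length-3 paths i -> j' -> i' -> j in the bipartite layer graph
    #    (a crude proxy for Cannistraci-Hebb CH3-L3 scoring), then activate
    #    the n_remove highest-scoring absent links.
    path3 = new_mask @ new_mask.t() @ new_mask          # (out, in) path counts
    path3 = path3.masked_fill(new_mask.bool(), float("-inf"))
    grow_idx = torch.topk(path3.flatten(), n_remove, largest=True).indices
    new_mask = new_mask.flatten()
    new_mask[grow_idx] = 1.0
    return new_mask.view_as(mask)

The actual CHT algorithm uses a more elaborate link-prediction score and a prune/regrow schedule over training; this sketch only conveys the overall structure of magnitude-based pruning followed by topology-driven regrowth.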
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free downloading, distribution, and reuse, provided that the author and the preprint are cited in any reuse.