ANN-to-SNN conversion is an important approach for obtaining high-performance spiking neural networks (SNNs), yet current conversion methodologies predominantly focus on dense architectures, overlooking the structural sparsity that characterizes biological neural systems. In this work, we propose a sparse ANN-to-SNN conversion framework that transfers the learned sparse topology of the source ANN to the resulting SNN. We instantiate this framework with Cannistraci-Hebb Training (CHT), a brain-inspired dynamic sparse training method. Across CNN experiments on CIFAR-10, CIFAR-100, and DVSGesture, and ViT experiments on ImageNet-1K, SNNs produced by our framework achieve near-dense accuracy while outperforming SNNs obtained from static pruning and sparse-training baselines. Compared with their dense SNN counterparts, our sparse SNNs also reduce energy consumption in all evaluated settings. Beyond these accuracy-efficiency gains, the resulting sparse SNNs exhibit biologically plausible structural properties, including meta-depth and scale-free organization. These results suggest that learned sparse topology is a useful design dimension for ANN-to-SNN conversion, enabling high-accuracy sparse SNNs with substantially reduced energy consumption.
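To make the core idea concrete, the following is a minimal sketch of sparse ANN-to-SNN conversion for a single linear layer, assuming rate coding, integrate-and-fire neurons with soft (subtract) reset, and max-norm threshold balancing. The 20% weight density, layer sizes, and simulation length are illustrative assumptions, and the learned CHT mask is replaced here by a random binary mask; this is not the paper's actual training procedure, only the conversion step that inherits a fixed sparse topology.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse layer from a trained ANN: dense weights times a
# learned binary mask (here random at 20% density, standing in for CHT).
W = rng.normal(size=(16, 32))
mask = rng.random(W.shape) < 0.2
W_sparse = W * mask                     # topology the SNN inherits

x = rng.random(32)                      # input activations in [0, 1]
pre = W_sparse @ x                      # pre-activations (constant input current)
ann_out = np.maximum(pre, 0.0)          # ReLU output of the sparse ANN layer

# Rate-based conversion: threshold set to the layer's maximum pre-activation
# ("max-norm" balancing), so all firing rates stay within [0, 1].
T = 2000                                # number of simulation time steps
v_th = max(pre.max(), 1e-8)
v = np.zeros(16)                        # membrane potentials
spikes = np.zeros(16)                   # spike counts
for _ in range(T):
    v += pre                            # integrate constant current each step
    fired = v >= v_th
    spikes += fired
    v[fired] -= v_th                    # soft reset: subtract the threshold

# Firing rate rescaled to activation units approximates the ReLU output,
# with error bounded by v_th / T per neuron.
snn_out = spikes / T * v_th
```

With soft reset and constant input current, each neuron's residual potential stays in [0, v_th), so the rate estimate converges to the ReLU activation at rate O(1/T); negative pre-activations never cross the threshold and correctly map to zero output.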