Preprint
Article

This version is not peer-reviewed.

Shared Representation Learning for Joint CT Reconstruction and Anatomical Segmentation

Submitted:

10 February 2026

Posted:

11 February 2026


Abstract
Denoising-based CT reconstruction methods can suppress high-frequency textures that are relevant for subtle lesion visibility. Motivated by hybrid convolution–attention designs such as CTLformer, this paper proposes a frequency-constrained denoising framework that preserves diagnostically relevant textures while reducing noise. The method introduces a dual-domain loss combining spatial fidelity with frequency-band constraints computed using discrete cosine transform representations. Evaluations on 52,000 paired slices from two low-dose CT datasets show that, relative to CNN-only and attention-only baselines, the proposed approach increases PSNR by 0.7–1.1 dB while maintaining higher high-frequency energy consistency. Reader-oriented texture metrics also improve by 8%–14% in regions with fine structural patterns.
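The dual-domain loss described above combines a spatial fidelity term with a frequency-band constraint on DCT coefficients. The paper does not give the exact formulation here, so the following is a minimal illustrative sketch, assuming a mean-squared spatial term, a diagonal high-frequency cutoff `band`, and a weighting factor `lam` (all hypothetical names and choices, not the authors' definitions):

```python
import numpy as np
from scipy.fft import dctn


def dual_domain_loss(pred, target, band=0.5, lam=0.1):
    """Sketch of a spatial + DCT-band loss for two 2-D image arrays."""
    # Spatial fidelity: mean squared error in the image domain.
    spatial = np.mean((pred - target) ** 2)

    # Orthonormal type-II DCT of each image.
    P = dctn(pred, norm="ortho")
    T = dctn(target, norm="ortho")

    # Crude high-frequency band mask: coefficients whose normalized
    # (row + column) index exceeds the cutoff. The real method may use
    # several bands with separate weights.
    h, w = pred.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    mask = (yy / h + xx / w) / 2.0 >= band

    # Frequency-band constraint: penalize mismatch of high-frequency
    # coefficients, discouraging over-smoothing of fine textures.
    freq = np.mean((P[mask] - T[mask]) ** 2)

    return spatial + lam * freq
```

The frequency term directly penalizes any loss of high-frequency DCT energy, which is one way such a constraint could preserve subtle textures that a purely spatial loss tends to smooth away.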
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
