Preprint Article / Version 1 / Preserved in Portico / This version is not peer-reviewed

Contrast-enhanced Liver MR Synthesis using Gradient Regularized Multi-Modal Multi-Discrimination Sparse Attention Fusion GAN

Version 1: Received: 9 May 2023 / Approved: 10 May 2023 / Online: 10 May 2023 (08:10:11 CEST)

A peer-reviewed article of this Preprint also exists.

Jiao, C.; Ling, D.; Bian, S.; Vassantachart, A.; Cheng, K.; Mehta, S.; Lock, D.; Zhu, Z.; Feng, M.; Thomas, H.; Scholey, J.E.; Sheng, K.; Fan, Z.; Yang, W. Contrast-Enhanced Liver Magnetic Resonance Image Synthesis Using Gradient Regularized Multi-Modal Multi-Discrimination Sparse Attention Fusion GAN. Cancers 2023, 15, 3544.

Abstract

Purpose: To synthesize abdominal contrast-enhanced MR images, we developed an image gradient regularized multi-modal multi-discrimination sparse attention fusion generative adversarial network (GRMM-GAN) to avoid repeated contrast injections to patients and facilitate adaptive monitoring. Methods: With IRB approval, 165 abdominal MR studies from 61 liver cancer patients were retrospectively retrieved from our institutional database. Each study included T2, T1 pre-contrast (T1pre), and T1 contrast-enhanced (T1ce) images. The GRMM-GAN synthesis pipeline consists of a sparse attention fusion network, an image gradient regularizer (GR), and a generative adversarial network with multi-discrimination. The studies were randomly divided into 115 for training, 20 for validation, and 30 for testing. The two pre-contrast MR modalities, T2 and T1pre, were used as inputs during training, and the T1ce image at the portal venous phase was used as the output. The synthesized T1ce images were compared with the ground truth T1ce images using peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean squared error (MSE); image synthesis quality was further evaluated with a Turing test and expert contours. Results: The proposed GRMM-GAN model achieved a PSNR of 28.56, an SSIM of 0.869, and an MSE of 83.27, with statistically significant improvements (p < 0.05) over state-of-the-art models on all tested metrics. The average Turing test score was 52.33%, close to random guessing, supporting the model's potential for clinical application. In the tumor-specific region analysis, the average tumor contrast-to-noise ratio (CNR) of the synthesized MR images did not differ significantly from that of the real MR images. The average DICE coefficient between tumor contours on real and synthetic images was 0.90, compared with an inter-operator DICE of 0.91. Conclusion: We demonstrated a novel multi-modal MR image synthesis network, GRMM-GAN, that synthesizes T1ce MR images from pre-contrast T1 and T2 MR images. GRMM-GAN shows promise for avoiding repeated contrast injections during radiation therapy.
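For readers who want to reproduce the reported image-quality metrics, the following is a minimal sketch (not taken from the authors' code) of how PSNR, SSIM, and MSE can be computed with scikit-image, assuming the ground truth and synthesized T1ce images are available as same-shape NumPy arrays; the helper name evaluate_synthesis and the data_range convention are illustrative choices, not details from the paper.

    import numpy as np
    from skimage.metrics import (
        mean_squared_error,
        peak_signal_noise_ratio,
        structural_similarity,
    )

    def evaluate_synthesis(real_t1ce: np.ndarray, synth_t1ce: np.ndarray) -> dict:
        # Both arrays must have the same shape; intensities are compared directly,
        # so any normalization must be applied consistently to both images.
        data_range = float(real_t1ce.max() - real_t1ce.min())
        return {
            "PSNR": peak_signal_noise_ratio(real_t1ce, synth_t1ce, data_range=data_range),
            "SSIM": structural_similarity(real_t1ce, synth_t1ce, data_range=data_range),
            "MSE": mean_squared_error(real_t1ce, synth_t1ce),
        }

Because the intensity normalization and slice selection used in the study are not specified on this page, values produced by this sketch will match the reported numbers only if the same preprocessing is applied.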

Keywords

MR synthesis, GAN, multi-modal fusion, tumor monitoring, contrast enhancement

Subject

Medicine and Pharmacology, Oncology and Oncogenics
