Guiding Unsupervised CBCT-to-CT synthesis using Content and style Representation by an Enhanced Perceptual synthesis (CREPs) loss
Abstract
The goal of this research was to propose an unsupervised learning technique for producing synthetic CT (sCT) images from CBCT data. A dataset of 180 pairs of brain CT and CBCT scans and 180 pairs of pelvis CT and CBCT scans was used for model training. The devised methodology trains a 2D conditional Generative Adversarial Network (cGAN) under unsupervised conditions. To tackle the convergence challenges associated with unsupervised learning, a novel ConvNeXt-based perceptual loss (CREPs loss) was developed to guide the CBCT-to-CT generation process.
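To illustrate the general idea of a perceptual loss computed on frozen ConvNeXt features, the sketch below is a minimal, hypothetical PyTorch implementation. The class name `ConvNextPerceptualLoss`, the torchvision ConvNeXt-Tiny backbone, the selected feature stages, and the L1 feature comparison are assumptions for illustration only; they are not the authors' exact content-and-style CREPs formulation.

```python
# Hypothetical sketch of a ConvNeXt-feature perceptual loss (assumes PyTorch + torchvision >= 0.13).
import torch.nn as nn
from torchvision.models import convnext_tiny, ConvNeXt_Tiny_Weights


class ConvNextPerceptualLoss(nn.Module):
    """Compares intermediate ConvNeXt features of a synthetic CT and a reference image."""

    def __init__(self, stages=(1, 3, 5)):
        super().__init__()
        # Frozen ConvNeXt-Tiny feature extractor; gradients flow only to the generator.
        self.features = convnext_tiny(weights=ConvNeXt_Tiny_Weights.DEFAULT).features.eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.stages = set(stages)  # which feature stages contribute to the loss (assumed choice)
        self.criterion = nn.L1Loss()

    def forward(self, synthetic, reference):
        # Replicate single-channel CT/CBCT slices to 3 channels for the ImageNet backbone.
        if synthetic.shape[1] == 1:
            synthetic = synthetic.repeat(1, 3, 1, 1)
            reference = reference.repeat(1, 3, 1, 1)
        loss = 0.0
        x, y = synthetic, reference
        for idx, block in enumerate(self.features):
            x, y = block(x), block(y)
            if idx in self.stages:
                loss = loss + self.criterion(x, y)
        return loss


# Usage sketch: penalize feature discrepancy between the generator output and the input CBCT.
# loss = ConvNextPerceptualLoss()(sct_batch, cbct_batch)
```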