Partial Volume Correction on 177Lu-SPECT sinograms with Deep Learning trained on synthetic data
Abstract
This study introduces PVCNet, a deep learning method for Partial Volume Correction in 177Lu-SPECT imaging, designed to improve quantitative accuracy and image resolution prior to reconstruction. The method is trained on a large synthetic dataset derived from real patient images. PVCNet uses a dual neural network architecture that takes sinograms and projected attenuation maps as inputs. Its performance is evaluated on both real experimental phantom data and Monte Carlo simulations, and benchmarked against conventional Resolution Modeling (RM) and Iterative Yang (iY) methods, with promising results. On phantom data, PVCNet achieved higher Recovery Coefficients (0.80/0.93/1.00 on 22/28/37 mm diameter spheres) than RM (0.52/0.66/0.77), and slightly lower values than iY (0.96/0.99/1.06), which benefits from an exact object segmentation that PVCNet does not require. On the simulated patient data, PVCNet achieved the best results both in Normalized Root Mean Squared Error (2.786/2.635/2.345 for RM/iY/PVCNet, respectively) and in mean Recovery Coefficients in the background (1.05/0.95/1.03 for RM/iY/PVCNet, respectively), kidneys (0.93/1.16/0.97, respectively), and lesions (0.66/1.14/0.91, respectively).
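The abstract reports two quantitative figures of merit: the Recovery Coefficient (measured activity over true activity in a region, ideally 1.0) and the Normalized Root Mean Squared Error. As a minimal sketch of how these metrics are commonly computed (the exact normalization convention used in the paper is an assumption here; NRMSE is taken as RMSE divided by the mean of the reference):

```python
import numpy as np

def recovery_coefficient(measured: np.ndarray, true: np.ndarray) -> float:
    """Ratio of mean measured activity to mean true activity in a region.

    A value below 1.0 indicates under-recovery due to partial volume effects.
    """
    return float(np.mean(measured) / np.mean(true))

def nrmse(estimate: np.ndarray, reference: np.ndarray) -> float:
    """Root mean squared error normalized by the mean of the reference.

    Note: other normalizations (e.g. by the reference range) are also common;
    the convention used in the study is not specified in the abstract.
    """
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))
    return float(rmse / np.mean(reference))

# Illustrative use on toy region-of-interest values (hypothetical data):
measured_sphere = np.array([0.75, 0.85, 0.80])
true_sphere = np.array([1.0, 1.0, 1.0])
rc = recovery_coefficient(measured_sphere, true_sphere)  # → 0.8
```

In practice these would be evaluated over voxels inside segmented spheres (phantom) or organs and lesions (patient simulation), with RC values closest to 1.0 indicating the most accurate quantification.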