Publications by authors named "Tonghe Wang"

64 Publications

Male pelvic CT multi-organ segmentation using synthetic MRI-aided dual pyramid networks.

Phys Med Biol 2021 Apr 16;66(8). Epub 2021 Apr 16.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America.

The delineation of the prostate and organs-at-risk (OARs) is fundamental to prostate radiation treatment planning, but is currently labor-intensive and observer-dependent. We aimed to develop an automated computed tomography (CT)-based multi-organ (bladder, prostate, rectum, and left and right femoral heads) segmentation method for prostate radiation therapy treatment planning. The proposed method uses synthetic MRIs (sMRIs) to offer superior soft-tissue information for male pelvic CT images. Cycle-consistent adversarial networks (CycleGAN) were used to generate CT-based sMRIs. Dual pyramid networks (DPNs) extracted features from both CTs and sMRIs. A deep attention strategy was integrated into the DPNs to select the most relevant features from both CTs and sMRIs to identify organ boundaries. The CT-based sMRI generated from our previously trained CycleGAN and its corresponding CT images were input to the proposed DPNs to provide complementary information for pelvic multi-organ segmentation. The proposed method was trained and evaluated using datasets from 140 patients with prostate cancer, and was then compared against state-of-the-art methods. The Dice similarity coefficients and mean surface distances between our results and ground truth were 0.95 ± 0.05, 1.16 ± 0.70 mm; 0.88 ± 0.08, 1.64 ± 1.26 mm; 0.90 ± 0.04, 1.27 ± 0.48 mm; 0.95 ± 0.04, 1.08 ± 1.29 mm; and 0.95 ± 0.04, 1.11 ± 1.49 mm for the bladder, prostate, rectum, and left and right femoral heads, respectively. The mean center-of-mass distance was within 3 mm for all organs. Our method performed significantly better than competing methods on most evaluation metrics. We demonstrated the feasibility of sMRI-aided DPNs for multi-organ segmentation on pelvic CT images, and its superiority over other networks. The proposed method could be used in routine prostate cancer radiotherapy treatment planning to rapidly segment the prostate and standard OARs.
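Several studies in this list report the Dice similarity coefficient (DSC) and mean surface distance (MSD) as their primary segmentation metrics. The sketch below shows one common way to compute both for binary 3D masks; it is an illustrative implementation under an assumed voxel spacing, not the authors' evaluation code.

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(pred, truth):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def _surface(mask):
    """Boundary voxels: the mask minus its binary erosion."""
    return mask & ~ndimage.binary_erosion(mask)

def mean_surface_distance(pred, truth, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean distance (mm) between the two mask surfaces."""
    pred_s, truth_s = _surface(pred.astype(bool)), _surface(truth.astype(bool))
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_truth = ndimage.distance_transform_edt(~truth_s, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred_s, sampling=spacing)
    return 0.5 * (dist_to_truth[pred_s].mean() + dist_to_pred[truth_s].mean())
```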
Source: http://dx.doi.org/10.1088/1361-6560/abf2f9
April 2021

Learning-Based Stopping Power Mapping on Dual-Energy CT for Proton Radiation Therapy.

Int J Part Ther 2021 12;7(3):46-60. Epub 2021 Feb 12.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA.

Purpose: Dual-energy computed tomography (DECT) has been used to derive relative stopping power (RSP) maps by obtaining the energy dependence of photon interactions. The DECT-derived RSP maps could potentially be compromised by image noise levels and the severity of artifacts when using physics-based mapping techniques. This work presents a noise-robust learning-based method to predict RSP maps from DECT for proton radiation therapy.

Materials And Methods: The proposed method uses a residual attention cycle-consistent generative adversarial network to bring DECT-to-RSP mapping close to a 1-to-1 mapping by introducing an inverse RSP-to-DECT mapping. To evaluate the proposed method, we retrospectively investigated 20 head-and-neck cancer patients with DECT scans in proton radiation therapy simulation. Ground truth RSP values were assigned by calculation based on chemical compositions and acted as learning targets in the training process for DECT datasets; they were evaluated against results from the proposed method using a leave-one-out cross-validation strategy.
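The residual attention cycle-consistent GAN described above pairs a forward DECT-to-RSP generator with an inverse RSP-to-DECT generator so that the mapping stays close to one-to-one. The sketch below shows the generic cycle-consistency objective such a design minimizes; the generator modules and loss weight are placeholders, not the paper's implementation.

```python
import torch.nn.functional as F

def cycle_consistency_loss(g_fwd, g_inv, dect, rsp, lam=10.0):
    """Generic CycleGAN-style cycle loss: map DECT -> RSP -> DECT and
    RSP -> DECT -> RSP, then penalize both reconstruction errors."""
    rsp_fake = g_fwd(dect)        # forward mapping: DECT -> RSP
    dect_fake = g_inv(rsp)        # inverse mapping: RSP -> DECT
    dect_cycled = g_inv(rsp_fake)
    rsp_cycled = g_fwd(dect_fake)
    return lam * (F.l1_loss(dect_cycled, dect) + F.l1_loss(rsp_cycled, rsp))
```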

Results: The predicted RSP maps showed an average normalized mean square error of 2.83% across the whole body volume and an average mean error of less than 3% in all volumes of interest. With additional simulated noise added to the DECT datasets, the proposed method maintained comparable performance, while the physics-based stoichiometric method suffered degraded accuracy as the noise level increased. The average differences from ground truth in dose-volume-histogram (DVH) metrics for clinical target volumes were less than 0.2 Gy for the evaluated dose metrics, with no statistical significance. The maximum difference in DVH metrics of organs at risk was around 1 Gy on average.

Conclusion: These results strongly indicate the high accuracy of RSP maps predicted by our machine-learning-based method and show its potential feasibility for proton treatment planning and dose calculation.
Source: http://dx.doi.org/10.14338/IJPT-D-20-00020.1
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7886267
February 2021

Synthetic dual-energy CT for MRI-only based proton therapy treatment planning using label-GAN.

Phys Med Biol 2021 Mar 9;66(6):065014. Epub 2021 Mar 9.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America.

MRI-only treatment planning is highly desirable in the current proton radiation therapy workflow due to its appealing advantages, such as bypassing MR-CT co-registration, avoiding x-ray CT exposure dose, and reducing medical cost. However, MRI alone cannot provide the stopping power ratio (SPR) information needed for dose calculations. Given that dual-energy CT (DECT) can estimate SPR with higher accuracy than conventional single-energy CT, we propose a deep learning-based method in this study to generate synthetic DECT (sDECT) from MRI for SPR calculation. Because the contrast difference between high-energy CT (HECT) and low-energy CT (LECT) is important and must be modeled accurately, we propose a novel label generative adversarial network-based model which can not only discriminate the realism of sDECT but also differentiate HECT from LECT. A cohort of 57 head-and-neck cancer patients with DECT and MRI pairs was used to validate the performance of the proposed framework. The sDECT results and their derived SPR maps were compared with clinical DECT and the corresponding SPR, respectively. The mean absolute errors for synthetic LECT and HECT were 79.98 ± 18.11 HU and 80.15 ± 16.27 HU, respectively. The corresponding SPR maps generated from sDECT showed a normalized mean absolute error of 5.22% ± 1.23%. Compared with traditional CycleGANs, our proposed method significantly improves the accuracy of sDECT. The results indicate that, on our dataset, the sDECT images from MRI are close to planning DECT, and thus show promising potential for generating SPR maps for proton therapy.
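A discriminator that judges realism while also distinguishing the two energy channels can be built in the style of an auxiliary-classifier GAN. The sketch below is one plausible reading of such a "label" discriminator, with a real/fake head and a HECT/LECT head; the layer sizes and head design are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LabelDiscriminator(nn.Module):
    """Two-headed discriminator: (1) real vs. synthetic CT patch,
    (2) high-energy vs. low-energy CT label."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.realism = nn.Linear(base * 4, 1)  # real vs. fake logit
        self.energy = nn.Linear(base * 4, 1)   # HECT vs. LECT logit

    def forward(self, x):
        h = self.features(x)
        return self.realism(h), self.energy(h)
```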
Source: http://dx.doi.org/10.1088/1361-6560/abe736
March 2021

Thyroid gland delineation in noncontrast-enhanced CTs using deep convolutional neural networks.

Phys Med Biol 2021 Feb 16;66(5):055007. Epub 2021 Feb 16.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America.

The purpose of this study is to develop a deep learning method for thyroid delineation with high accuracy, efficiency, and robustness in noncontrast-enhanced head and neck CTs. A cross-sectional analysis consisting of six tests, including randomized cross-validation and hold-out experiments, tests of prediction accuracy between cancer and benign cases, and cross-gender analysis, was performed to evaluate the performance of the proposed deep-learning-based method. CT images of 1977 patients with suspected thyroid carcinoma were retrospectively investigated. The automatically segmented thyroid gland volume was compared against physician-approved clinical contours using quantitative metrics, Pearson correlation, and Bland-Altman analysis. The quantitative metrics included the Dice similarity coefficient (DSC), sensitivity, specificity, Jaccard index (JAC), Hausdorff distance (HD), mean surface distance (MSD), residual mean square distance (RMSD), and center of mass distance (CMD). The robustness of the proposed method was further tested using the nonparametric Kruskal-Wallis test to assess the equality of distribution of DSC values. The proposed method's accuracy remained high through all the tests, with median DSC, JAC, sensitivity and specificity higher than 0.913, 0.839, 0.856 and 0.979, respectively. The proposed method also resulted in median MSD, RMSD, HD and CMD of less than 0.31 mm, 0.48 mm, 2.06 mm and 0.50 mm, respectively. The MSD and RMSD were 0.40 ± 0.29 mm and 0.70 ± 0.46 mm, respectively. Concurrent testing of the proposed method with 3D U-Net and V-Net showed that the proposed method had significantly improved performance. The proposed deep-learning method achieved accurate and robust performance through six cross-sectional analysis tests.
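The robustness check above compares DSC distributions across the six tests with the Kruskal-Wallis H test. A minimal sketch of that comparison with SciPy follows; the per-test DSC arrays are made-up example data, not values from the study.

```python
from scipy.stats import kruskal

# Hypothetical per-test DSC samples (one array per cross-sectional test).
dsc_cross_validation = [0.92, 0.91, 0.93, 0.90, 0.94]
dsc_hold_out = [0.91, 0.93, 0.92, 0.90, 0.92]
dsc_cross_gender = [0.90, 0.92, 0.91, 0.93, 0.91]

stat, p = kruskal(dsc_cross_validation, dsc_hold_out, dsc_cross_gender)
# A large p-value means no evidence that the DSC distributions differ across
# tests, i.e. segmentation quality is stable across experimental conditions.
print(f"H = {stat:.3f}, p = {p:.3f}")
```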
Source: http://dx.doi.org/10.1088/1361-6560/abc5a6
February 2021

Head-and-neck organs-at-risk auto-delineation using dual pyramid networks for CBCT-guided adaptive radiotherapy.

Phys Med Biol 2021 Feb 11;66(4):045021. Epub 2021 Feb 11.

Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America.

Organ-at-risk (OAR) delineation is a key step in cone-beam CT (CBCT)-based adaptive radiotherapy planning, and it can be a time-consuming, labor-intensive process that is subject to variability. We aim to develop a fully automated approach, aided by synthetic MRI, for rapid and accurate CBCT multi-organ contouring in head-and-neck (HN) cancer patients. MRI has superb soft-tissue contrast, while CBCT offers bony-structure contrast. Using the complementary information provided by MRI and CBCT is expected to enable accurate multi-organ segmentation in HN cancer patients. In our proposed method, MR images are first synthesized using a pre-trained cycle-consistent generative adversarial network given CBCT. The features of CBCT and synthetic MRI (sMRI) are then extracted using dual pyramid networks for the final delineation of organs. CBCT images and their corresponding manual contours were used as pairs to train and test the proposed model. Quantitative metrics including Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), mean surface distance, and residual mean square distance (RMS) were used to evaluate the proposed method. The proposed method was evaluated on a cohort of 65 HN cancer patients. CBCT images were collected from those patients who received proton therapy. Overall, DSC values of 0.87 ± 0.03, 0.79 ± 0.10/0.79 ± 0.11, 0.89 ± 0.08/0.89 ± 0.07, 0.90 ± 0.08, 0.75 ± 0.06/0.77 ± 0.06, 0.86 ± 0.13, 0.66 ± 0.14, 0.78 ± 0.05/0.77 ± 0.04, 0.96 ± 0.04, 0.89 ± 0.04/0.89 ± 0.04, 0.83 ± 0.02, and 0.84 ± 0.07 were achieved for OARs commonly used in treatment planning, including the brain stem, left/right cochlea, left/right eye, larynx, left/right lens, mandible, optic chiasm, left/right optic nerve, oral cavity, left/right parotid, pharynx, and spinal cord, respectively. This study provides a rapid and accurate OAR auto-delineation approach, which can be used for adaptive radiation therapy.
Source: http://dx.doi.org/10.1088/1361-6560/abd953
February 2021

A review on medical imaging synthesis using deep learning and its clinical applications.

J Appl Clin Med Phys 2021 Jan 11;22(1):11-36. Epub 2020 Dec 11.

Department of Radiation Oncology, Emory University, Atlanta, GA, USA.

This paper reviewed deep learning-based studies of medical image synthesis and their clinical applications. Specifically, we summarized recent developments in deep learning-based methods for inter- and intra-modality image synthesis by listing and highlighting the proposed methods, study designs, and reported performance, together with the related clinical applications of representative studies. The challenges among the reviewed studies were then summarized and discussed.
Source: http://dx.doi.org/10.1002/acm2.13121
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7856512
January 2021

Intensity non-uniformity correction in MR imaging using residual cycle generative adversarial network.

Phys Med Biol 2020 11 27;65(21):215025. Epub 2020 Nov 27.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America.

Correcting or reducing the effects of voxel intensity non-uniformity (INU) within a given tissue type is a crucial issue for quantitative magnetic resonance (MR) image analysis in daily clinical practice. Although it has no severe impact on visual diagnosis, INU can heavily degrade the performance of automatic quantitative analyses such as segmentation, registration, feature extraction, and radiomics. In this study, we present an advanced deep learning-based INU correction algorithm called residual cycle generative adversarial network (res-cycle GAN), which integrates the residual block concept into a cycle-consistent GAN (cycle-GAN). In cycle-GAN, an inverse transformation was implemented between the INU-uncorrected and corrected magnetic resonance imaging (MRI) images to constrain the model by forcing the calculation of both an INU-corrected MRI and a synthetic corrected MRI. A fully convolutional neural network integrating residual blocks was applied in the generator of cycle-GAN to enhance the end-to-end transformation from raw MRI to INU-corrected MRI. A cohort of 55 abdominal patients' T1-weighted MR images with INU, paired with their corrections by a clinically established and commonly used method (N4ITK), was used to evaluate the proposed res-cycle GAN based INU correction algorithm. Quantitative comparisons of normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC) indices, and spatial non-uniformity (SNU) were made between the proposed method and other approaches. Our res-cycle GAN based method achieved an NMAE of 0.011 ± 0.002, a PSNR of 28.0 ± 1.9 dB, an NCC of 0.970 ± 0.017, and an SNU of 0.298 ± 0.085. Our proposed method showed significant improvements (p < 0.05) in NMAE, PSNR, NCC and SNU over other algorithms, including a conventional GAN and U-Net. Once the model is well trained, our approach can automatically generate the corrected MR images in a few minutes, eliminating the need for manual parameter setting.
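The res-cycle GAN above inserts residual blocks into the CycleGAN generator. The following is a generic residual block of the kind commonly used in such generators; the channel count and normalization choice are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """conv -> norm -> ReLU -> conv -> norm, plus an identity skip connection,
    so the block learns a residual correction on top of its input."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)  # identity skip eases end-to-end training
```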
Source: http://dx.doi.org/10.1088/1361-6560/abb31f
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7934018
November 2020

Automatic quantification of myocardium and pericardial fat from coronary computed tomography angiography: a multicenter study.

Eur Radiol 2020 Nov 18. Epub 2020 Nov 18.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.

Objectives: To develop a deep learning-based method for simultaneous myocardium and pericardial fat quantification from coronary computed tomography angiography (CCTA) for the diagnosis and treatment of cardiovascular disease (CVD).

Methods: We retrospectively identified CCTA data obtained between May 2008 and July 2018 in a multicenter (six centers) CVD study. The proposed method was evaluated on 422 patients' data in two studies. The first, overall study involved training the model on CVD patients and testing on non-CVD patients, as well as training on non-CVD patients and testing on CVD patients. The second study was performed using a leave-center-out approach, as sketched below. Method performance was evaluated using the Dice similarity coefficient (DSC), Jaccard index (JAC), 95% Hausdorff distance (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and center of mass distance (CMD). The robustness of the proposed method was tested using the nonparametric Kruskal-Wallis test and post hoc tests to assess the equality of the distribution of DSC values among the different tests.
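A leave-center-out protocol trains on data from all but one center and tests on the held-out center, rotating through every center. A minimal sketch of that split logic follows; the data structures are hypothetical, not the study's pipeline.

```python
from collections import defaultdict

def leave_center_out_splits(samples):
    """samples: list of (center_id, patient_data) pairs.
    Yields (held_out_center, train_set, test_set) for each center."""
    by_center = defaultdict(list)
    for center, data in samples:
        by_center[center].append(data)
    for held_out in by_center:
        test = by_center[held_out]
        train = [d for c, ds in by_center.items() if c != held_out for d in ds]
        yield held_out, train, test
```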

Results: The automatic segmentation achieved a strong correlation with manual contours (ICC and R > 0.97, p value < 0.001 throughout all tests). The accuracy of the proposed method remained high through all the tests, with a median DSC higher than 0.88 for pericardial fat and 0.96 for myocardium. The proposed method also resulted in mean MSD, RMSD, HD95, and CMD of less than 1.36 mm for pericardial fat and 1.00 mm for myocardium.

Conclusions: The proposed deep learning-based segmentation method enables accurate simultaneous quantification of myocardium and pericardial fat in a multicenter study.

Key Points: • Deep learning-based myocardium and pericardial fat segmentation method tested on 422 patients' coronary computed tomography angiography in a multicenter study. • The proposed method provides segmentations with high volumetric accuracy (ICC and R > 0.97, p value < 0.001) and shapes similar to manual annotations by experienced radiologists (median Dice similarity coefficient ≥ 0.88 for pericardial fat and 0.96 for myocardium).
Source: http://dx.doi.org/10.1007/s00330-020-07482-5
November 2020

Deformable MR-CBCT prostate registration using biomechanically constrained deep learning networks.

Med Phys 2021 Jan 27;48(1):253-263. Epub 2020 Nov 27.

Department of Radiation Oncology, Emory University, Atlanta, GA, USA.

Background And Purpose: Radiotherapeutic dose escalation to dominant intraprostatic lesions (DIL) in prostate cancer could potentially improve tumor control. The purpose of this study was to develop a method to accurately register multiparametric magnetic resonance imaging (MRI) with CBCT images for improved DIL delineation, treatment planning, and dose monitoring in prostate radiotherapy.

Methods And Materials: We proposed a novel registration framework that considers biomechanical constraints when deforming the MRI to the CBCT. The registration framework consists of two segmentation convolutional neural networks (CNNs) for MR and CBCT prostate segmentation, and a three-dimensional (3D) point cloud (PC) matching network. Image intensity-based rigid registration was first performed to initialize the alignment between the MR and CBCT prostate. The aligned prostates were then meshed into tetrahedron elements to generate volumetric PC representations of the prostate shapes. The 3D PC matching network was developed to predict a PC motion vector field that deforms the MRI prostate PC to match the CBCT prostate PC. To regularize the network's motion prediction with biomechanical constraints, finite element (FE) modeling-generated motion fields were used to train the network. MRI and CBCT images of 50 patients with intraprostatic fiducial markers were used in this study. Registration results were evaluated using three metrics: Dice similarity coefficient (DSC), mean surface distance (MSD), and target registration error (TRE). In addition to spatial registration accuracy, Jacobian determinants and strain tensors were calculated to assess the physical fidelity of the deformation field.
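Physical fidelity of a deformation field is commonly checked through the Jacobian determinant: values near 1 indicate little local volume change, and non-positive values flag folding. A sketch of that computation for a dense 3D displacement field follows; the array layout and voxel spacing are assumptions, not the paper's code.

```python
import numpy as np

def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
    """dvf: displacement field of shape (3, X, Y, Z), in the same units as
    spacing. Returns the per-voxel Jacobian determinant of x -> x + u(x)."""
    grads = [np.gradient(dvf[i], *spacing) for i in range(3)]  # du_i/dx_j
    jac = np.empty(dvf.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(jac)  # values <= 0 indicate local folding
```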

Results: The mean and standard deviation of our method were 0.93 ± 0.01, 1.66 ± 0.10 mm, and 2.68 ± 1.91 mm for DSC, MSD, and TRE, respectively. The mean TRE of the proposed method was reduced by 29.1%, 14.3%, and 11.6% as compared to image intensity-based rigid registration, coherent point drifting (CPD) nonrigid surface registration, and modality-independent neighborhood descriptor (MIND) registration, respectively.

Conclusion: We developed a new framework to accurately register the prostate on MRI to CBCT images for external beam radiotherapy. The proposed method could be used to aid DIL delineation on CBCT, treatment planning, dose escalation to DIL, and dose monitoring.
Source: http://dx.doi.org/10.1002/mp.14584
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7903879
January 2021

Biomechanically constrained non-rigid MR-TRUS prostate registration using deep learning based 3D point cloud matching.

Med Image Anal 2021 01 7;67:101845. Epub 2020 Oct 7.

Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States.

A non-rigid MR-TRUS image registration framework is proposed for prostate interventions. The registration framework consists of a convolutional neural network (CNN) for MR prostate segmentation, a CNN for TRUS prostate segmentation, and a point-cloud based network for rapid 3D point cloud matching. Volumetric prostate point clouds were generated from the segmented prostate masks using tetrahedron meshing. The point cloud matching network was trained using deformation fields generated by finite element analysis; the network therefore implicitly models the underlying biomechanical constraints when performing point cloud matching. A total of 50 patients' datasets were used for network training and testing. Alignment of prostate shapes after registration was evaluated using three metrics: Dice similarity coefficient (DSC), mean surface distance (MSD), and Hausdorff distance (HD). Internal point-to-point registration accuracy was assessed using target registration error (TRE). Jacobian determinants and strain tensors of the predicted deformation field were calculated to analyze its physical fidelity. On average, the mean and standard deviation were 0.94 ± 0.02, 0.90 ± 0.23 mm, 2.96 ± 1.00 mm, and 1.57 ± 0.77 mm for DSC, MSD, HD, and TRE, respectively. Robustness of our method to point cloud noise was evaluated by adding different levels of noise to the query point clouds. Our results demonstrated that the proposed method can rapidly perform MR-TRUS image registration with good accuracy and robustness.
Source: http://dx.doi.org/10.1016/j.media.2020.101845
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7725979
January 2021

Breast tumor segmentation in 3D automatic breast ultrasound using Mask scoring R-CNN.

Med Phys 2021 Jan 18;48(1):204-214. Epub 2020 Nov 18.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.

Purpose: Automatic breast ultrasound (ABUS) imaging has become an essential tool in breast cancer diagnosis since it provides complementary information to other imaging modalities. Lesion segmentation on ABUS is a prerequisite step of breast cancer computer-aided diagnosis (CAD). This work aims to develop a deep learning-based method for automatic breast tumor segmentation in three-dimensional (3D) ABUS.

Methods: For breast tumor segmentation in ABUS, we developed a Mask scoring region-based convolutional neural network (R-CNN) that consists of five subnetworks: a backbone, a region proposal network, a region convolutional neural network head, a mask head, and a mask score head. A network block building a direct correlation between mask quality and region class was integrated into the Mask scoring R-CNN based framework for the segmentation of new ABUS images with ambiguous regions of interest (ROIs). For segmentation accuracy evaluation, we retrospectively investigated 70 patients with breast tumors confirmed by needle biopsy and manually delineated on ABUS, of which 40 were used for fivefold cross-validation and 30 for a hold-out test. The comparison between the automatic breast tumor segmentations and the manual contours was quantified by (i) six metrics including Dice similarity coefficient (DSC), Jaccard index, 95% Hausdorff distance (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and center of mass distance (CMD); and (ii) Pearson correlation analysis and Bland-Altman analysis.
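The distinguishing idea of Mask scoring R-CNN is a head that regresses the IoU between a predicted mask and its ground truth, so the reported score reflects mask quality rather than box classification confidence alone. The sketch below shows the target such a head is trained to predict; it is a conceptual illustration, not the paper's training code.

```python
import torch

def mask_iou(pred_mask, gt_mask, thresh=0.5):
    """IoU between a predicted soft mask and a binary ground-truth mask.
    Used as the regression target for the mask score head."""
    pred = (pred_mask > thresh).float()
    inter = (pred * gt_mask).sum()
    union = pred.sum() + gt_mask.sum() - inter
    return inter / union.clamp(min=1.0)

# During training the mask score head is supervised to output this IoU;
# at inference, classification confidence is multiplied by the predicted
# IoU so that instances are ranked by actual mask quality.
pred = torch.rand(28, 28)
gt = (torch.rand(28, 28) > 0.5).float()
print(mask_iou(pred, gt))
```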

Results: The mean (median) DSC was 85% ± 10.4% (89.4%) and 82.1% ± 14.5% (85.6%) for cross-validation and the hold-out test, respectively. The corresponding HD95, MSD, RMSD, and CMD of the two tests were 1.646 ± 1.191 and 1.665 ± 1.129 mm, 0.489 ± 0.406 and 0.475 ± 0.371 mm, 0.755 ± 0.755 and 0.751 ± 0.508 mm, and 0.672 ± 0.612 and 0.665 ± 0.729 mm. The mean volumetric difference (mean, with ± 1.96 standard deviation limits) was 0.47 cc ([-0.77, 1.71]) for cross-validation and 0.23 cc ([-0.23, 0.69]) for the hold-out test.

Conclusion: We developed a novel Mask scoring R-CNN approach for the automated segmentation of the breast tumor in ABUS images and demonstrated its accuracy for breast tumor segmentation. Our learning-based method can potentially assist the clinical CAD of breast cancer using 3D ABUS imaging.
Source: http://dx.doi.org/10.1002/mp.14569
January 2021

Deep learning-based real-time volumetric imaging for lung stereotactic body radiation therapy: a proof of concept study.

Phys Med Biol 2020 12 18;65(23):235003. Epub 2020 Dec 18.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America. Co-first author.

Due to the inter- and intra-fractional variation of respiratory motion, it is highly desirable to provide real-time volumetric images during the treatment delivery of lung stereotactic body radiation therapy (SBRT) for accurate and active motion management. In this proof-of-concept study, we propose a novel generative adversarial network integrated with perceptual supervision to derive instantaneous volumetric images from a single 2D projection. Our proposed network, named TransNet, consists of three modules: encoding, transformation, and decoding. Rather than using only an image distance loss between the generated 3D images and the ground truth 3D CT images to supervise the network, a perceptual loss in feature space is integrated into the loss function to force TransNet to yield accurate lung boundaries. Adversarial supervision is also used to improve the realism of the generated 3D images. We conducted a simulation study on 20 patient cases who had received lung SBRT treatment at our institution and undergone 4D-CT simulation, and evaluated the efficacy and robustness of our method for four projection angles: 0°, 30°, 60° and 90°. For each 3D CT image set of a breathing phase, we simulated its 2D projections at these angles. For each projection angle, a patient's 3D CT images of nine phases and the corresponding 2D projection data were used to train our network for that specific patient, with the remaining phase used for testing. The mean absolute error of the 3D images obtained by our method was 99.3 ± 14.1 HU. The peak signal-to-noise ratio and structural similarity index metric within the tumor region of interest were 15.4 ± 2.5 dB and 0.839 ± 0.090, respectively. The center of mass distance between the manual tumor contours on the 3D images obtained by our method and the manual tumor contours on the corresponding 3D phase CT images was within 2.6 mm, with a mean value of 1.26 mm averaged over all cases. Our method was also validated in a simulated challenging scenario with increased respiratory motion amplitude and tumor shrinkage, and achieved acceptable results. These experimental results demonstrate the feasibility and efficacy of our 2D-to-3D method for lung cancer patients, providing a potential solution for in-treatment real-time on-board volumetric imaging for tumor tracking and dose delivery verification to ensure the effectiveness of lung SBRT treatment.
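Perceptual supervision compares images in the feature space of a pretrained network instead of (or in addition to) pixel space. Below is a common way to build such a loss from VGG16 features; using VGG16, these particular layers, and skipping ImageNet normalization are illustrative assumptions, not necessarily TransNet's choices.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# Frozen feature extractor: the first few VGG16 conv blocks.
_vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(pred, target):
    """L1 distance between VGG feature maps of predicted and target slices.
    Inputs: (N, 1, H, W) tensors; grayscale is replicated to 3 channels.
    ImageNet mean/std normalization is omitted here for brevity."""
    pred3 = pred.repeat(1, 3, 1, 1)
    target3 = target.repeat(1, 3, 1, 1)
    return F.l1_loss(_vgg(pred3), _vgg(target3))
```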
Source: http://dx.doi.org/10.1088/1361-6560/abc303
December 2020

Machine learning in quantitative PET: A review of attenuation correction and low-count image reconstruction methods.

Phys Med 2020 Aug 29;76:294-306. Epub 2020 Jul 29.

Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA.

The rapid expansion of machine learning is offering a new wave of opportunities for nuclear medicine. This paper reviews applications of machine learning for the study of attenuation correction (AC) and low-count image reconstruction in quantitative positron emission tomography (PET). Specifically, we present the developments of machine learning methodology, ranging from random forest and dictionary learning to the latest convolutional neural network-based architectures. For application in PET attenuation correction, two general strategies are reviewed: 1) generating synthetic CT from MR or non-AC PET for the purposes of PET AC, and 2) direct conversion from non-AC PET to AC PET. For low-count PET reconstruction, recent deep learning-based studies and the potential advantages over conventional machine learning-based methods are presented and discussed. In each application, the proposed methods, study designs and performance of published studies are listed and compared with a brief discussion. Finally, the overall contributions and remaining challenges are summarized.
Source: http://dx.doi.org/10.1016/j.ejmp.2020.07.028
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7484241
August 2020

CT-based multi-organ segmentation using a 3D self-attention U-net network for pancreatic radiotherapy.

Med Phys 2020 Sep 2;47(9):4316-4324. Epub 2020 Aug 2.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.

Purpose: Segmentation of organs-at-risk (OARs) is a weak link in the radiotherapy treatment planning process because manual contouring is labor-intensive and time-consuming. This work aimed to develop a deep learning-based method for rapid and accurate pancreatic multi-organ segmentation that can expedite the treatment planning process.

Methods: We retrospectively investigated one hundred patients who underwent computed tomography (CT) simulation and had contours delineated. Eight OARs, including the large bowel, small bowel, duodenum, left kidney, right kidney, liver, spinal cord, and stomach, were the target organs to be segmented. The proposed three-dimensional (3D) deep attention U-Net features a deep attention strategy to effectively differentiate multiple organs (a sketch of such an attention gate follows below). The performance of the proposed method was evaluated using six metrics: Dice similarity coefficient (DSC), sensitivity, specificity, Hausdorff distance 95% (HD95), mean surface distance (MSD), and residual mean square distance (RMSD).
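Deep attention U-Nets typically insert attention gates on the skip connections so decoder features can suppress irrelevant encoder activations before concatenation. A compact 3D attention gate in the style of Oktay et al. is sketched below; the channel arguments are placeholders, and this is not necessarily the paper's exact module.

```python
import torch
import torch.nn as nn

class AttentionGate3D(nn.Module):
    """Gates encoder skip features x with a coarser decoder signal g:
    an additive attention map rescales x voxel-wise before concatenation."""
    def __init__(self, x_ch, g_ch, inter_ch):
        super().__init__()
        self.theta_x = nn.Conv3d(x_ch, inter_ch, 1)
        self.phi_g = nn.Conv3d(g_ch, inter_ch, 1)
        self.psi = nn.Conv3d(inter_ch, 1, 1)

    def forward(self, x, g):
        # g is assumed to be already resampled to x's spatial size.
        attn = torch.sigmoid(self.psi(torch.relu(self.theta_x(x) + self.phi_g(g))))
        return x * attn  # suppress encoder features outside the organ of interest
```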

Results: The contours generated by the proposed method closely resemble the ground-truth manual contours, as evidenced by encouraging quantitative results in terms of DSC, sensitivity, specificity, HD95, MSD and RMSD. For DSC, mean values of 0.91 ± 0.03, 0.89 ± 0.06, 0.86 ± 0.06, 0.95 ± 0.02, 0.95 ± 0.02, 0.96 ± 0.01, 0.87 ± 0.05 and 0.93 ± 0.03 were achieved for large bowel, small bowel, duodenum, left kidney, right kidney, liver, spinal cord and stomach, respectively.

Conclusions: The proposed method could significantly expedite the treatment planning process by rapidly segmenting multiple OARs. The method could potentially be used in pancreatic adaptive radiotherapy to increase dose delivery accuracy and minimize gastrointestinal toxicity.
Source: http://dx.doi.org/10.1002/mp.14386
September 2020

Head and neck multi-organ auto-segmentation on CT images aided by synthetic MRI.

Med Phys 2020 Sep 2;47(9):4294-4302. Epub 2020 Aug 2.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.

Purpose: Because the manual contouring process is labor-intensive and time-consuming, segmentation of organs-at-risk (OARs) is a weak link in the radiotherapy treatment planning process. Our goal was to develop a synthetic MR (sMR)-aided dual pyramid network (DPN) for rapid and accurate head and neck multi-organ segmentation in order to expedite the treatment planning process.

Methods: Forty-five patients' paired CTs, MRs, and manual contours were included as our training dataset. Nineteen OARs were the target organs to be segmented. The proposed sMR-aided DPN method features a deep attention strategy to effectively segment multiple organs. The performance of the sMR-aided DPN method was evaluated using five metrics: Dice similarity coefficient (DSC), Hausdorff distance 95% (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and volume difference. Our method was further validated using the 2015 head and neck challenge data.

Results: The contours generated by the proposed method closely resemble the ground truth manual contours, as evidenced by encouraging quantitative results in terms of DSC using the 2015 head and neck challenge data. Mean DSC values of 0.91 ± 0.02, 0.73 ± 0.11, 0.96 ± 0.01, 0.78 ± 0.09/0.78 ± 0.11, 0.88 ± 0.04/0.88 ± 0.06 and 0.86 ± 0.08/0.85 ± 0.1 were achieved for brain stem, chiasm, mandible, left/right optic nerve, left/right parotid, and left/right submandibular, respectively.

Conclusions: We demonstrated the feasibility of sMR-aided DPN for head and neck multi-organ delineation on CT images. Our method has shown superiority over the other methods on the 2015 head and neck challenge data results. The proposed method could significantly expedite the treatment planning process by rapidly segmenting multiple OARs.
Source: http://dx.doi.org/10.1002/mp.14378
September 2020

Automatic multi-needle localization in ultrasound images using large margin mask RCNN for ultrasound-guided prostate brachytherapy.

Phys Med Biol 2020 10 9;65(20):205003. Epub 2020 Oct 9.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America.

Multi-needle localization in ultrasound (US) images is a crucial step of treatment planning for US-guided prostate brachytherapy. However, current computer-aided technologies are mostly focused on single-needle digitization, while manual digitization is labor-intensive and time-consuming. In this paper, we propose a deep learning-based workflow for fast automatic multi-needle digitization, including needle shaft detection and needle tip detection. The workflow is composed of two components: a large margin mask R-CNN model (LMMask R-CNN), which adopts a large margin loss to reformulate Mask R-CNN for needle shaft localization, and a needle-based density-based spatial clustering of applications with noise (DBSCAN) algorithm, which integrates priors to model each needle iteratively for needle shaft refinement and tip detection. In addition, we use skip connections in the neural network architecture to improve supervision in the hidden layers. Our workflow was evaluated on 23 patients who underwent US-guided high-dose-rate (HDR) prostate brachytherapy, with 339 needles tested in total. Our method detected 98% of the needles with a 0.091 ± 0.043 mm shaft error and a 0.330 ± 0.363 mm tip error. Compared with using Mask R-CNN alone or LMMask R-CNN alone, the proposed method gains a significant improvement in both shaft and tip error. The proposed method automatically digitizes all of a patient's needles within a second. It streamlines the workflow of transrectal ultrasound-guided HDR prostate brachytherapy and paves the way for the development of a real-time treatment planning system that is expected to further elevate the quality and outcome of HDR prostate brachytherapy.
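Grouping candidate shaft voxels into individual needles is a natural fit for DBSCAN, which clusters dense point groups and marks stray points as noise. A minimal sketch with scikit-learn follows; the eps/min_samples values and the random points are illustrative, not the paper's settings or data.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical candidate needle-shaft points (x, y, z) in mm, e.g. taken
# from the detection network's segmentation mask.
points = np.random.rand(500, 3) * np.array([50.0, 50.0, 40.0])

# eps ~ expected point spacing along a shaft; label -1 marks noise points.
labels = DBSCAN(eps=2.0, min_samples=5).fit_predict(points)

for needle_id in sorted(set(labels) - {-1}):
    shaft = points[labels == needle_id]
    tip = shaft[shaft[:, 2].argmax()]  # deepest point along insertion axis
    print(f"needle {needle_id}: {len(shaft)} points, tip at {tip.round(1)}")
```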
Source: http://dx.doi.org/10.1088/1361-6560/aba410
October 2020

Cone-beam CT-derived relative stopping power map generation via deep learning for proton radiotherapy.

Med Phys 2020 Sep 27;47(9):4416-4427. Epub 2020 Jul 27.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.

Purpose: In intensity-modulated proton therapy (IMPT), protons are used to deliver highly conformal dose distributions, targeting tumors, and sparing organs-at-risk. However, due to uncertainties in both patient setup and relative stopping power (RSP) calculation, margins are added to the treatment volume during treatment planning, leading to higher doses to normal tissues. Cone-beam computed tomography (CBCT) images are taken daily before treatment; however, the poor image quality of CBCT limits the use of these images for online dose calculation. In this work, we use a deep-learning-based method to predict RSP maps from daily CBCT images, allowing for online dose calculation in a step toward adaptive radiation therapy.

Methods: Twenty-three head-and-neck cancer patients were simulated using a Siemens TwinBeam dual-energy CT (DECT) scanner. Mixed-energy scans (equivalent to a 120 kVp single-energy CT scan) were converted to RSP maps for treatment planning. Cone-beam computed tomography images were taken on the first day of treatment, and the planning RSP maps were registered to these images. A deep learning network based on a cycle-GAN architecture, relying on a compound loss function designed for structural and contrast preservation, was then trained to create an RSP map from a CBCT image. Leave-one-out and hold-out cross-validations were used for evaluation, and mean absolute error (MAE), mean error (ME), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) were used to quantify the differences between the CT-based and CBCT-based RSP maps (see the metric sketch below). The proposed method was compared to a deformable image registration-based method, which was taken as the ground truth, and to two other deep learning methods. For one patient who underwent resimulation, the new planning RSP maps and CBCT images were used for further evaluation and validation.
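The evaluation above relies on MAE, ME, PSNR, and SSIM between CT-based and CBCT-based RSP maps. A compact way to compute these with NumPy and scikit-image is sketched below; it is a generic implementation, and deriving the data range from the reference map is an assumption.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def rsp_map_metrics(pred, ref):
    """pred, ref: RSP maps as float arrays on the same voxel grid."""
    diff = pred - ref
    data_range = ref.max() - ref.min()  # assumed dynamic range for PSNR/SSIM
    return {
        "MAE": float(np.abs(diff).mean()),
        "ME": float(diff.mean()),
        "PSNR": peak_signal_noise_ratio(ref, pred, data_range=data_range),
        "SSIM": structural_similarity(ref, pred, data_range=data_range),
    }
```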

Results: The CBCT-based RSP generation method was evaluated on 23 head-and-neck cancer patients. From leave-one-out testing, the MAE between CT-based and CBCT-based RSP was 0.06 ± 0.01 and the ME was -0.01 ± 0.01. The proposed method statistically outperformed the comparison DL methods in terms of MAE and ME when compared to the planning CT. In terms of dose comparison, the mean gamma passing rate at 3%/3 mm was 94% when three-dimensional (3D) gamma index was calculated per plan and 96% when gamma index was calculated per field.

Conclusions: The proposed method provides sufficiently accurate RSP map generation from CBCT images, allowing for evaluation of daily dose based on CBCT and possibly allowing for CBCT-guided adaptive treatment planning for IMPT.
Source: http://dx.doi.org/10.1002/mp.14347
September 2020

Automatic multi-catheter detection using deeply supervised convolutional neural network in MRI-guided HDR prostate brachytherapy.

Med Phys 2020 Sep 15;47(9):4115-4124. Epub 2020 Jun 15.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA.

Purpose: High-dose-rate (HDR) brachytherapy is an established technique used as a monotherapy option or as a focal boost in conjunction with external beam radiation therapy (EBRT) for treating prostate cancer. Radiation source path reconstruction is a critical procedure in HDR treatment planning, and manually identifying the source path is labor-intensive and time-inefficient. In recent years, magnetic resonance imaging (MRI) has become a valuable imaging modality for image-guided HDR prostate brachytherapy due to its superb soft-tissue contrast for target delineation and normal tissue contouring. The purpose of this study is to investigate a deep-learning-based method to automatically reconstruct multiple catheters in MRI for prostate cancer HDR brachytherapy treatment planning.

Methods: An attention-gated U-Net model incorporating total variation (TV) regularization was developed for multi-catheter segmentation in MRI. The attention gates were used to improve the accuracy of identifying small catheter points, while TV regularization was adopted to encode the natural spatial continuity of catheters into the model (a sketch of such a TV term follows below). The model was trained using binary catheter annotation images provided by experienced physicists as ground truth, paired with the original MRI images. After the network was trained, MR images of a new prostate cancer patient receiving HDR brachytherapy were fed into the model to predict the locations and shapes of all the catheters. Quantitative assessment of our proposed method was based on catheter shaft and tip errors compared to the ground truth.
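Total variation regularization encodes the spatial continuity of a catheter by penalizing voxel-to-voxel changes in the predicted probability map. A minimal anisotropic 3D TV term in PyTorch is sketched below; the exact formulation and weighting are assumptions rather than the paper's.

```python
import torch

def total_variation_3d(prob):
    """Anisotropic TV of a (N, 1, D, H, W) probability map: the mean
    absolute difference between neighboring voxels along each axis.
    Adding this to the segmentation loss discourages fragmented,
    discontinuous catheter predictions."""
    dz = (prob[:, :, 1:, :, :] - prob[:, :, :-1, :, :]).abs().mean()
    dy = (prob[:, :, :, 1:, :] - prob[:, :, :, :-1, :]).abs().mean()
    dx = (prob[:, :, :, :, 1:] - prob[:, :, :, :, :-1]).abs().mean()
    return dz + dy + dx

# Example: loss = bce_loss + 0.1 * total_variation_3d(torch.sigmoid(logits))
```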

Results: Our method detected 299 catheters from 20 patients receiving HDR prostate brachytherapy with a catheter tip error of 0.37 ± 1.68 mm and a catheter shaft error of 0.93 ± 0.50 mm. For catheter tip detection, our method localized 87% of the catheter tips within an error of less than 2.0 mm, and more than 71% of the tips within an absolute error of no more than 1.0 mm. For catheter shaft localization, 97% of catheters were detected with an error of less than 2.0 mm, while 63% were within 1.0 mm.

Conclusions: In this study, we proposed a novel multi-catheter detection method to precisely localize the tips and shafts of catheters in three-dimensional MRI images of HDR prostate brachytherapy. It paves the way for elevating the quality and outcome of MRI-guided HDR prostate brachytherapy.
Source: http://dx.doi.org/10.1002/mp.14307
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7708403
September 2020

Label-driven magnetic resonance imaging (MRI)-transrectal ultrasound (TRUS) registration using weakly supervised learning for MRI-guided prostate radiotherapy.

Phys Med Biol 2020 06 26;65(13):135002. Epub 2020 Jun 26.

Department of Radiation Oncology, Emory University, Atlanta, Georgia, United States of America.

Registration and fusion of magnetic resonance imaging (MRI) and transrectal ultrasound (TRUS) of the prostate can provide guidance for prostate brachytherapy. However, accurate registration remains a challenging task due to the lack of ground truth regarding voxel-level spatial correspondence, the limited field of view, and the low contrast-to-noise and signal-to-noise ratios in TRUS. In this study, we propose a fully automated deep learning approach based on weakly supervised learning to address these issues. We employ deep learning techniques to combine image segmentation and registration, including affine and nonrigid registration, to perform automated deformable MRI-TRUS registration. First, we trained two separate fully convolutional neural networks (CNNs) to perform pixel-wise prediction for MRI and TRUS prostate segmentation. Then, to initialize the registration, a 2D CNN was used to register MRI-TRUS prostate images with an affine registration. After that, a 3D UNET-like network was applied for nonrigid registration. For both the affine and nonrigid registration, pairs of MRI-TRUS labels were concatenated and fed into the neural networks for training. Due to the unavailability of ground-truth voxel-level correspondences and the lack of accurate intensity-based image similarity measures, we propose to use prostate label-derived volume overlaps and surface agreements as the optimization objective for weakly supervised network training. Specifically, we propose a hybrid loss function that integrates a Dice loss, a surface-based loss, and a bending energy regularization loss for the nonrigid registration. The Dice and surface-based losses were used to encourage the alignment of the prostate labels between MRI and TRUS, while the bending energy regularization loss (sketched below) was used to achieve a smooth deformation field. Thirty-six sets of patient data were used to test our registration method. The image registration results showed that the deformed MR image aligned well with the TRUS image, as judged by corresponding cysts and calcifications in the prostate. Quantitatively, our method produced a mean target registration error (TRE) of 2.53 ± 1.39 mm and a mean Dice coefficient of 0.91 ± 0.02. The mean surface distance (MSD) and Hausdorff distance (HD) between the registered MR prostate shape and the TRUS prostate shape were 0.88 and 4.41 mm, respectively. This work presents a deep learning-based, weakly supervised network for accurate MRI-TRUS image registration; the proposed method achieved promising registration performance in terms of Dice coefficient, TRE, MSD, and HD.
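The bending energy term above penalizes second spatial derivatives of the displacement field to keep the deformation smooth. A simplified sketch of that regularizer for a dense 3D DVF follows; it uses finite differences and omits the mixed-derivative terms of the full thin-plate energy, both assumptions made here for brevity.

```python
import torch

def bending_energy(dvf):
    """dvf: (N, 3, D, H, W) displacement field. Approximates the bending
    energy by summing squared second-order finite differences per axis."""
    def second_diff(u, dim):
        a = u.narrow(dim, 2, u.size(dim) - 2)
        b = u.narrow(dim, 1, u.size(dim) - 2)
        c = u.narrow(dim, 0, u.size(dim) - 2)
        return a - 2 * b + c

    energy = 0.0
    for dim in (2, 3, 4):  # the D, H, W spatial axes
        energy = energy + second_diff(dvf, dim).pow(2).mean()
    return energy
```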
Source: http://dx.doi.org/10.1088/1361-6560/ab8cd6
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7771987
June 2020

Pelvic multi-organ segmentation on cone-beam CT for prostate adaptive radiotherapy.

Med Phys 2020 Aug 11;47(8):3415-3422. Epub 2020 May 11.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.

Background And Purpose: The purpose of this study is to develop a deep learning-based approach to simultaneously segment five pelvic organs (prostate, bladder, rectum, and left and right femoral heads) on cone-beam CT (CBCT), as required for prostate adaptive radiotherapy planning.

Materials And Methods: We propose to utilize both CBCT and CBCT-based synthetic MRI (sMRI) for the segmentation of soft tissue and bony structures, as they provide complementary information for pelvic organ segmentation: CBCT images have superior bony-structure contrast, and sMRIs have superior soft-tissue contrast. Prior to segmentation, sMRI was generated using a cycle-consistent adversarial network (CycleGAN) trained on paired CBCT-MR images. To combine the advantages of both CBCT and sMRI, we developed a cross-modality attention pyramid network with late feature fusion. Our method processes the CBCT and sMRI inputs separately to extract CBCT-specific and sMRI-specific features before combining them in a late-fusion network for the final segmentation. The network was trained and tested using 100 patients' datasets, each including the CBCT and manual physician contours. For comparison, we trained another two networks with different network inputs and architectures. The segmentation results were compared to manual contours for evaluation.

Results: For the proposed method, Dice similarity coefficients and mean surface distances between the segmentation results and the ground truth were 0.96 ± 0.03, 0.65 ± 0.67 mm; 0.91 ± 0.08, 0.93 ± 0.96 mm; 0.93 ± 0.04, 0.72 ± 0.61 mm; 0.95 ± 0.05, 1.05 ± 1.40 mm; and 0.95 ± 0.05, 1.08 ± 1.48 mm for the bladder, prostate, rectum, and left and right femoral heads, respectively. Compared with the two competing methods, our method showed superior segmentation accuracy.

Conclusion: We developed a deep learning-based segmentation method to rapidly and accurately segment five pelvic organs simultaneously from daily CBCTs. The proposed method could be used in the clinic to support rapid target and organs-at-risk contouring for prostate adaptive radiation therapy.
Source: http://dx.doi.org/10.1002/mp.14196
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7429321
August 2020

Deep learning in medical image registration: a review.

Phys Med Biol 2020 10 22;65(20):20TR01. Epub 2020 Oct 22.

Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America.

This paper presents a review of deep learning (DL)-based medical image registration methods. We summarized the latest developments and applications of DL-based registration methods in the medical field, classifying them into seven categories according to methodology, function, and popularity. A detailed review of each category is presented, highlighting important contributions and identifying specific challenges, followed by a short assessment summarizing its achievements and future potential. We also provide a comprehensive comparison among DL-based methods for lung and brain registration using benchmark datasets. Lastly, we analyze the statistics of all the cited works from various aspects, revealing the popularity and future trends of DL-based medical image registration.
Source: http://dx.doi.org/10.1088/1361-6560/ab843e
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7759388
October 2020

Automatic segmentation and quantification of epicardial adipose tissue from coronary computed tomography angiography.

Phys Med Biol 2020 05 11;65(9):095012. Epub 2020 May 11.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America. Co-first author.

Epicardial adipose tissue (EAT) is a visceral fat deposit known for its association with factors such as obesity, diabetes mellitus, age, and hypertension. Segmenting the EAT in a fast and reproducible way is important for interpreting its role as an independent risk marker. However, EAT has a variable distribution, and various diseases may affect its volume, which adds complexity to the already time-consuming manual segmentation work. We propose a 3D deep attention U-Net method to automatically segment the EAT from coronary computed tomography angiography (CCTA). Five-fold cross-validation and hold-out experiments were used to evaluate the proposed method through a retrospective investigation of 200 patients. The automatically segmented EAT volume was compared with physician-approved clinical contours. Quantitative metrics used were the Dice similarity coefficient (DSC), sensitivity, specificity, Jaccard index (JAC), Hausdorff distance (HD), mean surface distance (MSD), residual mean square distance (RMSD), and center of mass distance (CMD). For cross-validation, the median DSC, sensitivity, and specificity were 92.7%, 91.1%, and 95.1%, respectively, while JAC, HD, CMD, MSD, and RMSD were 82.9% ± 8.8%, 3.77 ± 1.86 mm, 1.98 ± 1.50 mm, 0.37 ± 0.24 mm, and 0.65 ± 0.37 mm, respectively. For the hold-out test, the accuracy of the proposed method remained high. We developed a novel deep learning-based approach for the automated segmentation of the EAT on CCTA images and demonstrated its high accuracy through comparison with ground truth contours of 200 clinical patient cases using eight quantitative metrics, Pearson correlation, and Bland-Altman analysis. Our automatic EAT segmentation results show the potential of the proposed method for computer-aided diagnosis of coronary artery disease (CAD) in clinical settings.
Source: http://dx.doi.org/10.1088/1361-6560/ab8077
May 2020

Multi-needle Localization with Attention U-Net in US-guided HDR Prostate Brachytherapy.

Med Phys 2020 Jul 3;47(7):2735-2745. Epub 2020 Apr 3.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA.

Purpose: Ultrasound (US)-guided high-dose-rate (HDR) prostate brachytherapy requires clinicians to place HDR needles (catheters) into the prostate gland under transrectal US (TRUS) guidance in the operating room. The quality of the subsequent radiation treatment plan is largely dictated by the needle placements, which vary with the experience level of the clinicians and the procedure protocols. A real-time plan dose distribution, if available, could be a vital tool for more objective assessment of the needle placements, potentially improving the radiation plan quality and the treatment outcome. However, due to the low signal-to-noise ratio (SNR) of US imaging, real-time multi-needle segmentation in 3D TRUS, the major obstacle to real-time dose mapping, has not been realized to date. In this study, we propose a deep learning-based method that enables accurate and real-time digitization of multiple needles in the 3D TRUS images of HDR prostate brachytherapy.

Methods: A deep learning model based on the U-Net architecture was developed to segment multiple needles in the 3D TRUS images. Attention gates were included in our model to improve the prediction of small needle points. Furthermore, the spatial continuity of needles was encoded into our model with total variation (TV) regularization. The combined network was trained on 3D TRUS patches with a deep supervision strategy, where binary needle annotation images were provided as ground truth. The trained network was then used to localize and segment the HDR needles in a new patient's TRUS images. We evaluated our proposed method based on needle shaft and tip errors against manually defined ground truth and compared it with other state-of-the-art methods (U-Net and deeply supervised attention U-Net).

Results: Our method detected 96% of the 339 needles from 23 HDR prostate brachytherapy patients, with a shaft error of 0.290 ± 0.236 mm and a tip error of 0.442 ± 0.831 mm. For shaft localization, our method achieved 96% of localizations with less than 0.8 mm error (the needle diameter is 1.67 mm), while for tip localization, 75% of needles had 0 mm error and 21% had 2 mm error (the TRUS image slice thickness is 2 mm). No significant difference was observed (P = 0.83) in tip localization between our results and the ground truth. Compared with U-Net and the deeply supervised attention U-Net, the proposed method delivers a significant improvement in both shaft error and tip error (P < 0.05).

Conclusions: We proposed a new segmentation method to precisely localize the tips and shafts of multiple needles in 3D TRUS images of HDR prostate brachytherapy. The 3D rendering of the needles could help clinicians to evaluate the needle placements. It paves the way for the development of real-time plan dose assessment tools that can further elevate the quality and outcome of HDR prostate brachytherapy.
Source: http://dx.doi.org/10.1002/mp.14128
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7759387
July 2020

CBCT-based synthetic CT generation using deep-attention cycleGAN for pancreatic adaptive radiotherapy.

Med Phys 2020 Jun 28;47(6):2472-2483. Epub 2020 Mar 28.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.

Purpose: Current clinical application of cone-beam CT (CBCT) is limited to patient setup. Imaging artifacts and Hounsfield unit (HU) inaccuracy make CBCT-based adaptive planning presently impractical. In this study, we developed a deep-learning-based approach to improve CBCT image quality and HU accuracy for potential extended clinical use in CBCT-guided pancreatic adaptive radiotherapy.

Methods: Thirty patients previously treated with pancreas stereotactic body radiation therapy (SBRT) were included. The CBCT acquired prior to the first fraction of treatment was registered to the planning CT for training and generation of synthetic CT (sCT). A self-attention cycle-consistent generative adversarial network (cycleGAN) was used to generate the CBCT-based sCT. For the cohort of 30 patients, the CT-based contours and treatment plans were transferred to the first-fraction CBCTs and sCTs for dosimetric comparison.
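
To make the cycle-consistency idea concrete, here is a minimal sketch of one direction of a CycleGAN generator objective. The least-squares adversarial form, the L1 cycle term, and the weight lam_cyc are common conventions assumed for illustration; the paper's self-attention architecture and exact losses are not reproduced here.

```python
import torch

def cyclegan_generator_loss(G_cbct2ct, G_ct2cbct, D_ct, cbct, lam_cyc=10.0):
    """One direction of a CycleGAN generator objective (sketch).

    G_cbct2ct maps CBCT -> sCT, G_ct2cbct maps CT -> synthetic CBCT,
    and D_ct scores whether an image looks like a real planning CT.
    All three are hypothetical callables standing in for the networks.
    """
    sct = G_cbct2ct(cbct)
    # Adversarial term: push the discriminator to score sCT as real.
    adv = torch.mean((D_ct(sct) - 1.0) ** 2)
    # Cycle consistency: CBCT -> sCT -> CBCT should recover the input.
    cyc = torch.mean(torch.abs(G_ct2cbct(sct) - cbct))
    return adv + lam_cyc * cyc
```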

Results: In the abdominal region, the mean absolute error (MAE) between CT and sCT was 56.89 ± 13.84 HU, compared with 81.06 ± 15.86 HU between CT and the raw CBCT. No significant differences (P > 0.05) were observed in the PTV and OAR dose-volume histogram (DVH) metrics between the CT- and sCT-based plans, while significant differences (P < 0.05) were found between the CT- and CBCT-based plans.

Conclusions: The image similarity and the dosimetric agreement between the CT- and sCT-based plans validated the dose calculation accuracy of the sCT. The CBCT-based sCT approach can potentially improve treatment precision and thus minimize gastrointestinal toxicity.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1002/mp.14121DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7762616PMC
June 2020

4D-CT deformable image registration using multiscale unsupervised deep learning.

Phys Med Biol 2020 Apr 20;65(8):085003. Epub 2020 Apr 20.

Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA, 30322.

Deformable image registration (DIR) of 4D-CT images is important in multiple radiation therapy applications, including motion tracking of soft tissue or fiducial markers, target definition, image fusion, dose accumulation, and treatment response evaluation. It is very challenging to register 4D-CT abdominal images accurately and quickly due to their large appearance variance and bulky size. In this study, we proposed an accurate and fast multiscale DIR network (MS-DIRNet) for abdominal 4D-CT registration. MS-DIRNet consists of a global network (GlobalNet) and a local network (LocalNet). GlobalNet was trained using down-sampled whole image volumes, while LocalNet was trained using sampled image patches. Each network consists of a generator and a discriminator. The generator was trained to directly predict a deformation vector field (DVF) from the moving and target images and was implemented using convolutional neural networks with multiple attention gates. The discriminator was trained to differentiate the deformed images from the target images to provide additional DVF regularization. The loss function of MS-DIRNet includes three parts: an image similarity loss, an adversarial loss, and a DVF regularization loss. MS-DIRNet was trained in a completely unsupervised manner, meaning that ground truth DVFs are not needed. Unlike traditional DIR methods that calculate the DVF iteratively, MS-DIRNet calculates the final DVF in a single forward prediction, which significantly expedites the DIR process. MS-DIRNet was trained and tested on 25 patients' 4D-CT datasets using fivefold cross validation. For registration accuracy evaluation, target registration errors (TREs) of MS-DIRNet were compared to those of clinically used software. Our results showed that MS-DIRNet, with an average TRE of 1.2 ± 0.8 mm, outperformed the commercial software, with an average TRE of 2.5 ± 0.8 mm, in 4D-CT abdominal DIR, demonstrating the superior performance of our method in fiducial marker tracking and overall soft tissue alignment.
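
The three-part loss described in the abstract can be sketched as follows. This is a minimal illustration assuming an MSE similarity term, a least-squares adversarial term, and a first-order smoothness penalty; the actual MS-DIRNet loss terms and weights (alpha, beta, gamma here are placeholders) are not published in the abstract.

```python
import torch
import torch.nn.functional as F

def unsupervised_dir_loss(warped, fixed, dvf, d_score,
                          alpha=1.0, beta=0.1, gamma=0.01):
    """Composite loss for unsupervised deformable registration (sketch).

    warped:  moving image deformed by the predicted DVF
    fixed:   target image
    dvf:     (B, 3, D, H, W) deformation vector field
    d_score: discriminator output for the warped image
    The three terms mirror the abstract: image similarity,
    adversarial loss, and DVF regularization.
    """
    similarity = F.mse_loss(warped, fixed)
    adversarial = torch.mean((d_score - 1.0) ** 2)
    # First-order finite-difference smoothness penalty on the DVF.
    reg = (dvf[:, :, 1:] - dvf[:, :, :-1]).abs().mean() \
        + (dvf[:, :, :, 1:] - dvf[:, :, :, :-1]).abs().mean() \
        + (dvf[:, :, :, :, 1:] - dvf[:, :, :, :, :-1]).abs().mean()
    return alpha * similarity + beta * adversarial + gamma * reg
```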
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1088/1361-6560/ab79c4DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7775640PMC
April 2020

A preliminary study on a multiresolution-level inverse planning approach for Gamma Knife radiosurgery.

Med Phys 2020 Apr 26;47(4):1523-1532. Epub 2020 Feb 26.

Department of Radiation Oncology, Emory University, Atlanta, GA, 30322, USA.

Purpose: With many plan variables to determine, manual forward planning for Gamma Knife (GK) radiosurgery is very challenging. Inverse planning eases GK planning by determining the variables by solving an optimization problem. However, due to the vast search space, most inverse planning algorithms, including the one provided in the Leksell GammaPlan (LGP) treatment planning system, must predetermine the isocenter locations using geometric methods and then optimize the shot shapes and durations at these preselected isocenters. This sequential planning scheme does not necessarily lead to optimal isocenter locations and hence globally optimal plans. In this study, we proposed a multiresolution-level (MRL) inverse planning approach that tackles this large-scale GK optimization problem with an iterative method.

Methods: In our MRL approach, several rounds of optimization were performed with a progressively finer resolution used for the isocenter candidates. At each round, an optimization problem was solved to optimize the beam-on time for each collimator and sector at each isocenter candidate. The isocenters that received nonzero beam-on times in the previous round, together with their neighbors on a finer grid, were used as the isocenter candidates for the next round of optimization. After plan optimization, shot sequencing was performed to group the optimized sectors into deliverable composite shots.
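
The overall refinement loop might look like the skeleton below. Both callables (solve_durations for the inner sector-duration optimization and refine for the grid refinement) are hypothetical placeholders; the paper's actual optimizer is not described at this level of detail.

```python
def multiresolution_isocenter_search(solve_durations, coarse_grid, refine,
                                     n_rounds=3, tol=1e-6):
    """Iterative isocenter-candidate refinement (illustrative skeleton).

    solve_durations(candidates) -> optimized beam-on times, one per
    candidate (the inner collimator/sector optimization, not shown).
    refine(kept) -> the kept candidates plus their neighbors on the
    next finer resolution grid.
    """
    candidates = list(coarse_grid)
    for _ in range(n_rounds):
        times = solve_durations(candidates)
        # Keep only isocenters that received nonzero beam-on time.
        kept = [c for c, t in zip(candidates, times) if t > tol]
        candidates = refine(kept)
    return candidates
```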

Results: We tested our MRL approach on six GK cases previously treated at our institution. For the five cases with a single target, with similar target coverage obtained, our MRL inverse planning approach achieved better plan quality than manual forward planning and LGP inverse planning, with higher selectivity (0.73 ± 0.07 vs 0.72 ± 0.08 and 0.62 ± 0.10), lower gradient index (2.71 ± 0.25 vs 2.78 ± 0.24 and 3.00 ± 0.29), lower brainstem D dose (6.10 ± 4.46 Gy vs 8.87 ± 4.82 Gy and 9.17 ± 3.80 Gy), and shorter total beam-on time (62.1 ± 22.9 min vs 83.6 ± 28.2 min and 70.7 ± 16.7 min). For the case with six targets, compared with manual planning and LGP inverse planning, our MRL approach achieved higher selectivity (0.68 vs 0.57 and 0.47) and lower gradient index (3.77 vs 4.51 and 5.11); the beam-on time of our plan was slightly longer (206.4 min vs 204.7 min and 199.3 min). We also performed sector duration optimization at the isocenters determined by manual planning or LGP inverse planning, and the resulting plan quality was inferior to that of our MRL approach for all six cases.

Conclusions: This preliminary study has demonstrated the efficacy and feasibility of our MRL inverse planning approach for GK radiosurgery.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1002/mp.14078DOI Listing
April 2020

LungRegNet: An unsupervised deformable image registration method for 4D-CT lung.

Med Phys 2020 Apr 26;47(4):1763-1774. Epub 2020 Feb 26.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.

Purpose: To develop an accurate and fast deformable image registration (DIR) method for four-dimensional computed tomography (4D-CT) lung images. Deep learning-based methods have the potential to predict the deformation vector field (DVF) quickly in a few forward predictions. We developed an unsupervised deep learning method for 4D-CT lung DIR with excellent performance in terms of registration accuracy, robustness, and computational speed.

Methods: A fast and accurate 4D-CT lung DIR method, LungRegNet, was developed using deep learning. LungRegNet consists of two subnetworks, CoarseNet and FineNet. As the names suggest, CoarseNet predicts large lung motion on a coarse-scale image, while FineNet predicts local lung motion on a fine-scale image. Both CoarseNet and FineNet include a generator and a discriminator. The generator was trained to directly predict the DVF used to deform the moving image; the discriminator was trained to distinguish the deformed images from the original images. CoarseNet was first trained to deform the moving images, and the deformed images were then used to train FineNet. To increase the registration accuracy of LungRegNet, we generated vessel-enhanced images from pulmonary vasculature probability maps prior to the network prediction.
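
A coarse-to-fine cascade of this kind could be wired up roughly as below. The function names (coarse_net, fine_net, warp) and the simple additive composition of the two DVFs are assumptions for illustration, not LungRegNet's actual API.

```python
import torch
import torch.nn.functional as F

def coarse_to_fine_register(coarse_net, fine_net, warp, moving, fixed):
    """Two-stage coarse-to-fine DVF prediction (illustrative sketch).

    coarse_net sees downsampled volumes to capture large lung motion;
    fine_net refines the residual at full resolution. `warp` applies
    a DVF to an image (e.g., trilinear resampling).
    """
    moving_lo = F.interpolate(moving, scale_factor=0.5, mode='trilinear')
    fixed_lo = F.interpolate(fixed, scale_factor=0.5, mode='trilinear')
    # Upsample the coarse DVF and rescale displacements (assumed to be
    # in voxel units) to full resolution.
    dvf_coarse = F.interpolate(coarse_net(moving_lo, fixed_lo),
                               scale_factor=2.0, mode='trilinear') * 2.0
    warped = warp(moving, dvf_coarse)
    dvf_fine = fine_net(warped, fixed)  # residual local motion
    # Simple addition approximates DVF composition for small residuals.
    return dvf_coarse + dvf_fine
```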

Results: We performed fivefold cross validation on ten 4D-CT datasets from our department. To compare with other methods, we also tested our method on 10 separate DIRLAB datasets, which provide 300 manual landmark pairs per case for target registration error (TRE) calculation. Our results suggest that LungRegNet achieved better registration accuracy in terms of TRE on the DIRLAB datasets than other deep learning-based methods reported in the literature. Compared to conventional DIR methods, LungRegNet generated comparable registration accuracy, with TREs smaller than 2 mm. The integration of both the discriminator and the pulmonary vessel enhancement into the network was crucial to obtaining high registration accuracy for 4D-CT lung DIR. The mean and standard deviation of TRE were 1.00 ± 0.53 mm and 1.59 ± 1.58 mm on our datasets and the DIRLAB datasets, respectively.
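
For reference, TRE over landmark pairs reduces to a mean Euclidean distance in millimetres. A minimal sketch, assuming landmark coordinates in voxels and an isotropic-or-not spacing vector:

```python
import numpy as np

def mean_tre(fixed_pts, warped_pts, spacing=(1.0, 1.0, 1.0)):
    """Mean target registration error in mm (sketch).

    fixed_pts, warped_pts: (N, 3) landmark coordinates in voxels,
    where warped_pts are the moving-image landmarks mapped through
    the predicted DVF. spacing converts voxel offsets to mm.
    """
    diff_mm = (np.asarray(fixed_pts, dtype=float)
               - np.asarray(warped_pts, dtype=float)) * np.asarray(spacing)
    return float(np.mean(np.linalg.norm(diff_mm, axis=1)))
```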

Conclusions: An unsupervised deep learning-based method has been developed to register 4D-CT lung images rapidly and accurately. LungRegNet outperformed its deep learning-based peers and achieved excellent registration accuracy in terms of TRE.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1002/mp.14065DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7165051PMC
April 2020

Automated left ventricular myocardium segmentation using 3D deeply supervised attention U-net for coronary computed tomography angiography; CT myocardium segmentation.

Med Phys 2020 Apr 29;47(4):1775-1785. Epub 2020 Feb 29.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.

Purpose: Segmentation of the left ventricular myocardium (LVM) in coronary computed tomography angiography (CCTA) is important for the diagnosis of cardiovascular diseases. Due to poor image contrast and large variations in intensity and shape, LVM segmentation in CCTA is a challenging task. The purpose of this work was to develop a region-based deep learning method to automatically detect and segment the LVM based solely on CCTA images.

Methods: We developed a 3D deeply supervised U-Net, which incorporates attention gates (AGs) to focus on myocardial boundary structures, to segment LVM contours from CCTA. The deep attention U-Net (DAU-Net) was trained on the patients' CCTA images, with manual contour-derived binary masks used as the learning target. The network was supervised by a hybrid loss function, which combined a logistic loss and a Dice loss to simultaneously measure the similarities and discrepancies between the predictions and the training data. To evaluate segmentation accuracy, we retrospectively investigated 100 patients with suspected or confirmed coronary artery disease (CAD). The LVM volume was segmented by the proposed method and compared with physician-approved clinical contours. The quantitative metrics used were Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), residual mean square distance (RMSD), center of mass distance (CMD), and volume difference (VOD).
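
A hybrid logistic-plus-Dice loss of the kind described can be written in a few lines. The 50/50 weighting w and the soft-Dice formulation below are assumptions; the abstract does not give the exact weighting.

```python
import torch
import torch.nn.functional as F

def hybrid_seg_loss(logits, target, w=0.5, eps=1e-6):
    """Hybrid logistic + Dice segmentation loss (sketch).

    logits: raw network outputs, target: binary LVM mask, both
    (B, 1, D, H, W). BCE penalizes per-voxel discrepancies while
    the Dice term rewards global overlap.
    """
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)
    return w * bce + (1.0 - w) * dice
```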

Results: The proposed method created contours in very good agreement with the ground truth contours. The segmentation approach was benchmarked primarily using fivefold cross validation, and model predictions correlated and agreed well with the manual contours. The mean DSC of the contours delineated by our method was 91.6% across all patients, and the resulting HD was 6.840 ± 4.410 mm. The proposed method also yielded a small CMD (1.058 ± 1.245 mm) and VOD (1.640 ± 1.777 cc). Across all patients, the MSD and RMSD between the ground truth and the LVM volume produced by the proposed method were 0.433 ± 0.209 mm and 0.724 ± 0.375 mm, respectively.
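
Two of the overlap metrics quoted here (DSC and CMD) are simple to compute from binary masks; the following sketch shows one plausible NumPy formulation, with spacing assumed to be the (z, y, x) voxel size in mm.

```python
import numpy as np

def dsc(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def center_of_mass_distance(a, b, spacing):
    """Center-of-mass distance in mm between two binary masks."""
    ca = np.array(np.nonzero(a)).mean(axis=1) * np.asarray(spacing)
    cb = np.array(np.nonzero(b)).mean(axis=1) * np.asarray(spacing)
    return float(np.linalg.norm(ca - cb))
```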

Conclusions: We developed a novel deep learning-based approach for the automated segmentation of the LVM on CCTA images. We demonstrated the high accuracy of the proposed learning-based segmentation method through comparison with the ground truth contours of 100 clinical patient cases using six quantitative metrics. These results show the potential of automated LVM segmentation for the computer-aided diagnosis of CAD in the clinical setting.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1002/mp.14066DOI Listing
April 2020

MRI-Based Proton Treatment Planning for Base of Skull Tumors.

Int J Part Ther 2019 30;6(2):12-25. Epub 2019 Sep 30.

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA.

Purpose: To introduce a novel deep-learning method to generate synthetic computed tomography (SCT) scans for proton treatment planning and to evaluate its efficacy.

Materials And Methods: Fifty patients with base of skull tumors were divided into 2 nonoverlapping cohorts: a training cohort and a study cohort. Computed tomography and magnetic resonance imaging pairs for patients in the training cohort were used to train our novel 3-dimensional generative adversarial network (cycleGAN) algorithm. Upon completion of the training phase, SCT scans for patients in the study cohort were predicted based on their magnetic resonance images alone. The SCT scans obtained were compared against the corresponding original planning computed tomography scans as the ground truth, and mean absolute errors (in Hounsfield units [HU]) and normalized cross-correlations were calculated. Proton plans of 45 Gy in 25 fractions with 2 beams per plan were generated for the patients based on their planning computed tomography scans and recalculated on the SCT scans. Dose-volume histogram endpoints were compared. A γ-index analysis along 3 cardinal planes intersecting at the isocenter was performed. The proton distal range along each beam was calculated.
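
The two image-quality metrics named here are standard and can be sketched directly. This is a minimal illustration assuming HU-valued NumPy volumes and a boolean body mask; the paper's exact masking and averaging conventions are not specified in the abstract.

```python
import numpy as np

def mae_hu(sct, ct, body_mask):
    """Mean absolute error in HU within a body mask (sketch)."""
    return float(np.mean(np.abs(sct[body_mask] - ct[body_mask])))

def normalized_cross_correlation(sct, ct):
    """Global NCC between synthetic and planning CT volumes."""
    a = (sct - sct.mean()).ravel()
    b = (ct - ct.mean()).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```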

Results: Image quality metrics show agreement between the generated SCT scans and the ground truth, with mean absolute error values ranging from 38.65 to 65.12 HU (average, 54.55 ± 6.81 HU) and an average normalized cross-correlation of 0.96 ± 0.01. The dosimetric evaluation showed no statistically significant differences (P > 0.05) within planning target volumes for dose-volume histogram endpoints and the other studied metrics, with the exception of the dose covering 95% of the target volume, which showed a relative difference of 0.47%. The γ-index analysis showed an average passing rate of 98% with a 10% dose threshold and 2%/2-mm criteria. The proton ranges of 48 of 50 beams (96%) in this study were within the clinical tolerance adopted by 4 institutions.

Conclusions: This study shows that our method is capable of generating SCT scans with acceptable image quality, dose distribution agreement, and proton distal range compared with the ground truth. Our results establish a promising approach for magnetic resonance imaging-based proton treatment planning.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.14338/IJPT-19-00062.1DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6986397PMC
September 2019