Publications by authors named "Samuel Haaf"

3 Publications


A convolutional neural network algorithm for automatic segmentation of head and neck organs at risk using deep lifelong learning.

Med Phys 2019 May 4;46(5):2204-2213. Epub 2019 Apr 4.

Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA.

Purpose: This study proposes a lifelong learning-based convolutional neural network (LL-CNN) algorithm as a superior alternative to single-task learning approaches for automatic segmentation of head and neck organs at risk (OARs).

Methods And Materials: The LL-CNN was trained on twelve head and neck OARs simultaneously using a multitask learning framework. Once the weights of the shared network were established, the final multitask convolutional layer was replaced by a single-task convolutional layer. The single-task transfer-learning network was then trained on each OAR separately with early stopping. The accuracy of LL-CNN was assessed based on Dice score and root-mean-square error (RMSE) relative to manually delineated contours, set as the gold standard. LL-CNN was compared with 2D-UNet, 3D-UNet, a single-task CNN (ST-CNN), and a pure multitask CNN (MT-CNN). Training, validation, and testing followed Kaggle competition rules: 160 patients were used for training, 20 for internal validation, and 20 in a separate test set to report final prediction accuracies.
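The two evaluation metrics named above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's code; in particular, the exact quantity the RMSE is computed over (e.g. voxelwise mask difference vs. surface distances) is not stated in the abstract, so the direct voxelwise form below is an assumption.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def rmse(pred, truth):
    """Voxelwise root-mean-square error between two arrays (assumed form)."""
    diff = pred.astype(float) - truth.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy 2D example: predicted contour mask vs. "gold standard" mask.
truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1   # 16 voxels
pred = np.zeros((8, 8), dtype=int)
pred[2:6, 3:7] = 1    # 16 voxels, 12 of them overlapping the truth

print(dice_score(pred, truth))  # 2*12 / (16 + 16) = 0.75
```

A Dice score of 1.0 indicates perfect overlap with the manual contour; lower RMSE likewise indicates closer agreement.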

Results: On average, contours generated with LL-CNN had higher Dice coefficients and lower RMSE than 2D-UNet, 3D-UNet, ST-CNN, and MT-CNN. LL-CNN required ~72 h to train using a distributed learning framework on 2 Nvidia 1080 Ti graphics processing units. LL-CNN required 20 s to predict all 12 OARs, approximately as fast as every alternative method except MT-CNN.

Conclusions: This study demonstrated that for head and neck organs at risk, LL-CNN achieves a prediction accuracy superior to all alternative algorithms.
Source
http://dx.doi.org/10.1002/mp.13495

DoseNet: a volumetric dose prediction algorithm using 3D fully-convolutional neural networks.

Phys Med Biol 2018 Dec 4;63(23):235022. Epub 2018 Dec 4.

These two authors contributed equally.

The goal of this study is to demonstrate the feasibility of a novel fully-convolutional volumetric dose prediction neural network (DoseNet) and test its performance on a cohort of prostate stereotactic body radiotherapy (SBRT) patients. DoseNet is proposed as a superior alternative to U-Net and fully connected distance map-based neural networks for non-coplanar SBRT prostate dose prediction. DoseNet utilizes 3D convolutional downsampling with corresponding 3D deconvolutional upsampling to preserve memory while simultaneously increasing the receptive field of the network. DoseNet was implemented on 2 Nvidia 1080 Ti graphics processing units and uses a three-phase learning protocol to help achieve convergence and improve generalization. DoseNet was trained, validated, and tested with 151 patients following Kaggle competition rules. The dosimetric quality of DoseNet was evaluated by comparing the predicted dose distribution with the clinically approved delivered dose distribution in terms of conformity index, heterogeneity index, and various clinically relevant dosimetric parameters. The results indicate that the DoseNet algorithm is a superior alternative to U-Net and fully connected methods for prostate SBRT patients. DoseNet required ~50.1 h to train, and ~0.83 s to make a prediction on a 128 × 128 × 64 voxel image. In conclusion, DoseNet is capable of making accurate volumetric dose predictions for non-coplanar SBRT prostate patients, while simultaneously preserving computational efficiency.
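The downsample-then-upsample idea the abstract describes can be illustrated with plain NumPy stand-ins: average pooling in place of strided 3D convolution, and nearest-neighbour repetition in place of 3D deconvolution. This is a shape-level sketch only (no learned weights); the factor of 2 per stage and the toy volume size are assumptions for illustration.

```python
import numpy as np

def downsample3d(vol, factor=2):
    """Average-pool a 3D volume by `factor` along each axis (stand-in for
    strided 3D convolution; shrinks memory, enlarges the receptive field)."""
    d, h, w = vol.shape
    v = vol[:d - d % factor, :h - h % factor, :w - w % factor]
    return v.reshape(d // factor, factor,
                     h // factor, factor,
                     w // factor, factor).mean(axis=(1, 3, 5))

def upsample3d(vol, factor=2):
    """Nearest-neighbour upsampling (stand-in for 3D deconvolution),
    restoring the original grid so a dose value exists for every voxel."""
    return vol.repeat(factor, axis=0).repeat(factor, axis=1).repeat(factor, axis=2)

vol = np.random.rand(64, 64, 32)   # toy stand-in for a CT-sized volume
small = downsample3d(vol)          # (32, 32, 16): 8x fewer voxels to store
restored = upsample3d(small)       # back to (64, 64, 32), the input grid
print(small.shape, restored.shape)
```

Stacking several such stages is what lets a volumetric network see large spatial context while keeping the activations small enough to fit on GPU memory.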
Source
http://dx.doi.org/10.1088/1361-6560/aaef74

An unsupervised convolutional neural network-based algorithm for deformable image registration.

Phys Med Biol 2018 Sep 17;63(18):185017. Epub 2018 Sep 17.

Department of Radiation Oncology, University of California, San Francisco, CA, United States of America.

The purpose of this work is to develop a deep unsupervised learning strategy for cone-beam CT (CBCT) to CT deformable image registration (DIR). This technique uses a deep convolutional inverse graphics network (DCIGN) based DIR algorithm implemented on 2 Nvidia 1080 Ti graphics processing units. The model comprises an encoding and a decoding stage. The fully-convolutional encoding stage learns hierarchical features and simultaneously forms an information bottleneck, while the decoding stage restores the original dimensionality of the input image. Activations from the encoding stage are used as the input channels to a sparse DIR algorithm. DCIGN was trained using a distributed learning-based convolutional neural network architecture, with 285 head and neck patients used to train, validate, and test the algorithm. The accuracy of the DCIGN algorithm was evaluated on 100 synthetic cases and 12 held-out test patients. The results indicate that DCIGN performed better than rigid registration, intensity-corrected Demons, and landmark-guided deformable image registration on all evaluation metrics. DCIGN required ~14 h to train, and ~3.5 s to make a prediction on a 512 × 512 × 120 voxel image. In conclusion, DCIGN is able to maintain high accuracy in the presence of CBCT noise contamination, while simultaneously preserving high computational efficiency.
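The end product of any DIR pipeline, including the one sketched above, is a displacement vector field (DVF) that resamples one image onto the other's grid. The toy 2D, integer-valued, nearest-neighbour warp below illustrates that final resampling step only; real DIR uses 3D, sub-voxel displacements with interpolation, and nothing here is taken from the paper's implementation.

```python
import numpy as np

def warp2d(image, dvf):
    """Warp `image` with an integer displacement field `dvf` of shape
    (2, H, W): out[y, x] = image[y + dy, x + dx], clamped at the borders."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(ys + dvf[0], 0, h - 1)
    src_x = np.clip(xs + dvf[1], 0, w - 1)
    return image[src_y, src_x]

img = np.arange(16).reshape(4, 4)      # toy "CBCT" intensity grid
dvf = np.zeros((2, 4, 4), dtype=int)
dvf[1] += 1                            # sample one pixel to the right
print(warp2d(img, dvf))
```

A registration algorithm searches for the DVF that makes the warped moving image match the fixed image; a zero field leaves the image unchanged.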
Source
http://dx.doi.org/10.1088/1361-6560/aada66