Publications by authors named "Hannah Deng"

7 Publications


A novel incremental simulation of facial changes following orthognathic surgery using FEM with realistic lip sliding effect.

Med Image Anal 2021 May 5;72:102095. Epub 2021 May 5.

Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, 6560 Fannin St, Houston, TX 77030, USA; Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, 407 E 61st St, New York, NY 10065, USA.

Accurate prediction of facial soft-tissue changes following orthognathic surgery is crucial for improving surgical outcomes. We developed a novel incremental simulation approach using the finite element method (FEM) with a realistic lip sliding effect to improve prediction accuracy in the lip region. First, a lip-detailed mesh is generated based on accurately digitized lip surface points. Second, an improved facial soft-tissue change simulation method is developed by applying a lip sliding effect along with the mucosa sliding effect. Finally, the soft-tissue change initiated by the orthognathic surgery is simulated incrementally to facilitate a natural transition of the facial change and improve the effectiveness of the sliding effects. Our method was quantitatively validated on 35 retrospective clinical data sets by comparing it with the traditional FEM simulation method and the FEM simulation method with the mucosa sliding effect only. The surface deviation error of our method showed significant improvement in the upper and lower lips over both prior methods. In addition, evaluation with our lip-shape analysis, which reflects clinicians' qualitative assessment, also showed significantly improved lip prediction accuracy for the lower lip, and for the upper and lower lips as a whole, compared with the other two methods. In conclusion, prediction accuracy in the clinically critical region, i.e., the lips, significantly improved after applying incremental simulation with a realistic lip sliding effect, compared with FEM simulation methods lacking the lip sliding effect.
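To make the incremental simulation idea concrete, here is a minimal linear-FE sketch in Python. It is not the authors' implementation: it assumes a precomputed global stiffness matrix and simply ramps the prescribed bony-surface displacement over several increments, which is the point in the pipeline where the lip/mucosa sliding contact would be re-enforced.

    import numpy as np

    def simulate_incremental(K, free, fixed, total_disp, n_steps=10):
        # K: (n, n) global stiffness matrix of the soft-tissue mesh
        # free / fixed: index arrays for unconstrained DOFs and DOFs that
        # follow the planned bony movement
        # total_disp: final displacement prescribed at the fixed DOFs
        u = np.zeros(K.shape[0])
        for step in range(1, n_steps + 1):
            u[fixed] = total_disp * (step / n_steps)  # ramp the boundary condition
            # solve K_ff u_f = -K_fc u_c for the free DOFs at this increment
            rhs = -K[np.ix_(free, fixed)] @ u[fixed]
            u[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)
            # the lip/mucosa sliding contact would be re-enforced here
            # between increments (omitted in this sketch)
        return u

Stepping the boundary condition, rather than applying it at once, is what allows contact conditions such as the lip sliding effect to be updated as the face deforms.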
Source: http://dx.doi.org/10.1016/j.media.2021.102095

Diverse data augmentation for learning image segmentation with cross-modality annotations.

Med Image Anal 2021 Jul 20;71:102060. Epub 2021 Apr 20.

Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA.

The dearth of annotated data is a major hurdle in building reliable image segmentation models. Manual annotation of medical images is tedious, time-consuming, and highly variable across imaging modalities. The need for annotation can be ameliorated by leveraging an annotation-rich source modality when learning a segmentation model for an annotation-poor target modality. In this paper, we introduce a diverse data augmentation generative adversarial network (DDA-GAN) that trains a segmentation model for an unannotated target image domain by borrowing information from an annotated source image domain. This is achieved by generating diverse augmented data for the target domain via one-to-many source-to-target translation. The DDA-GAN uses unpaired images from the source and target domains and is an end-to-end convolutional neural network that (i) explicitly disentangles domain-invariant structural features related to segmentation from domain-specific appearance features, (ii) combines structural features from the source domain with appearance features randomly sampled from the target domain for data augmentation, and (iii) trains the segmentation model with the augmented data in the target domain and the annotations from the source domain. The effectiveness of our method is demonstrated both qualitatively and quantitatively, in comparison with the state of the art, for segmentation of craniomaxillofacial bony structures from MRI and cardiac substructures from CT.
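The core augmentation step, pairing source structure with randomly sampled target appearance, can be sketched in a few lines of PyTorch. The encoder and decoder here are placeholders, not DDA-GAN's actual architecture:

    import torch

    def augment_batch(struct_enc, app_enc, decoder, src_imgs, tgt_imgs):
        # structure comes from annotated source images, so the source
        # labels remain valid for the synthesized images
        s = struct_enc(src_imgs)
        # appearance is drawn from randomly shuffled unannotated target
        # images, giving one-to-many source-to-target translation
        a = app_enc(tgt_imgs[torch.randperm(tgt_imgs.size(0))])
        return decoder(s, a)  # target-styled images paired with source labels

Because only the appearance code changes between draws, each annotated source image can yield many distinct target-domain training samples.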
Source: http://dx.doi.org/10.1016/j.media.2021.102060
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8184609

Anatomy-Regularized Representation Learning for Cross-Modality Medical Image Segmentation.

IEEE Trans Med Imaging 2021 Jan 29;40(1):274-285. Epub 2020 Dec 29.

An increasing number of studies leverage unsupervised cross-modality synthesis to mitigate the limited-label problem in training medical image segmentation models. They typically transfer ground-truth annotations from a label-rich imaging modality to a label-poor imaging modality, under the assumption that different modalities share the same anatomical structure information. However, since these methods commonly use voxel/pixel-wise cycle consistency to regularize the mappings between modalities, high-level semantic information is not necessarily preserved. In this paper, we propose a novel anatomy-regularized representation learning approach for segmentation-oriented cross-modality image synthesis. It learns a common feature encoding across modalities to form a shared latent space, in which 1) an input and its synthesis present consistent anatomical structure information, and 2) the transformation between two images in one domain is preserved by their syntheses in the other domain. We applied our method to the tasks of cross-modality skull segmentation and cardiac substructure segmentation. Experimental results demonstrate the superiority of our method in comparison with state-of-the-art cross-modality medical image segmentation methods.
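The two constraints can be written as simple latent-space losses. This is a hedged sketch under assumed interfaces (a shared encoder E into the common latent space and a decoder G_b into modality B); the names and the L1 distance are illustrative choices, not the paper's exact formulation:

    import torch.nn.functional as F

    def anatomy_losses(E, G_b, x_a1, x_a2):
        # x_a1, x_a2: two images from modality A
        y_b1, y_b2 = G_b(E(x_a1)), G_b(E(x_a2))
        # (1) an input and its synthesis should encode the same anatomy
        l_consist = F.l1_loss(E(y_b1), E(x_a1)) + F.l1_loss(E(y_b2), E(x_a2))
        # (2) the latent transformation between two A-domain images should
        # be preserved between their B-domain syntheses
        l_transform = F.l1_loss(E(y_b2) - E(y_b1), E(x_a2) - E(x_a1))
        return l_consist, l_transform

Unlike voxel-wise cycle consistency, both terms compare encodings rather than raw intensities, which is what lets them act on semantic structure.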
Source: http://dx.doi.org/10.1109/TMI.2020.3025133
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8120796

Estimating Reference Shape Model for Personalized Surgical Reconstruction of Craniomaxillofacial Defects.

IEEE Trans Biomed Eng 2021 Feb 20;68(2):362-373. Epub 2021 Jan 20.

Objective: To estimate a patient-specific reference bone shape model for patients with craniomaxillofacial (CMF) defects due to facial trauma.

Methods: We proposed an automatic facial bone shape estimation framework using pre-traumatic conventional portrait photographs and post-traumatic head computed tomography (CT) scans, via 3D face reconstruction and a deformable shape model. Specifically, a three-dimensional (3D) face was first reconstructed from the patient's pre-traumatic portrait photographs. Second, a correlation model between the skin and bone surfaces was constructed using a sparse representation based on the CT images of normal training subjects. Third, by feeding the reconstructed 3D face into the correlation model, an initial reference shape model was generated. Finally, we refined the initial estimate by applying non-rigid surface matching between the initially estimated shape and the patient's post-traumatic bone, based on the adaptive-focus deformable shape model (AFDSM). Furthermore, a statistical shape model, built from the normal training subjects, was utilized to constrain the deformation process to avoid overfitting.

Results And Conclusion: The proposed method was evaluated using both synthetic and real patient data. Experimental results show that the patient's normal facial bony structure can be recovered using our method, and that the estimated reference shape model is considered clinically acceptable by an experienced CMF surgeon.

Significance: The proposed method is well suited to reconstructive surgical planning for complex CMF defects.
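The sparse-representation step in the Methods can be illustrated with a short sketch: code the patient's reconstructed face over a dictionary of training facial surfaces, then transfer the same coefficients to the paired bony dictionary. The dictionary layout and the use of scikit-learn's Lasso are assumptions for illustration, not the paper's code:

    import numpy as np
    from sklearn.linear_model import Lasso

    def estimate_initial_bone(face_dict, bone_dict, patient_face, alpha=0.01):
        # face_dict, bone_dict: (n_coords, n_subjects) matrices of stacked
        # vertex coordinates from paired training face/bone surfaces
        coder = Lasso(alpha=alpha, fit_intercept=False)
        coder.fit(face_dict, patient_face)   # sparse code over training faces
        w = coder.coef_
        return bone_dict @ w                 # same weights on the paired bones

Because the face and bone dictionaries are built from the same subjects, a sparse code learned on skin surfaces carries over directly to the bony surfaces.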
Source: http://dx.doi.org/10.1109/TBME.2020.2990586
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8163108

Estimating Reference Bony Shape Model for Personalized Surgical Reconstruction of Posttraumatic Facial Defects.

Med Image Comput Comput Assist Interv 2019 Oct 10;11768:327-335. Epub 2019 Oct 10.

BRIC and Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, USA.

In this paper, we introduce a method for estimating patient-specific reference bony shape models to plan reconstructive surgery for patients with craniomaxillofacial (CMF) defects acquired through trauma. We propose an automatic bony shape estimation framework that uses pre-traumatic portrait photographs and post-traumatic head computed tomography (CT) scans. A 3D facial surface is first reconstructed from the patient's pre-traumatic photographs. An initial estimate of the patient's normal bony shape is then obtained from the reconstructed facial surface via sparse representation, using a dictionary of paired facial and bony surfaces of normal subjects. We further refine the bony shape model by deforming the initial estimate to the post-traumatic 3D CT bony model, regularized by a statistical shape model built from a database of normal subjects. Experimental results show that our method effectively recovers the patient's normal facial bony shape in regions with defects, allowing CMF surgical planning to be performed precisely for a wider range of trauma-induced defects.
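The statistical-shape-model regularization mentioned above amounts to projecting a deformed estimate onto a PCA subspace learned from normal subjects, so the refinement cannot drift into implausible shapes. A minimal sketch, with illustrative variable names:

    import numpy as np

    def ssm_project(train_shapes, shape, n_modes=20):
        # train_shapes: (n_subjects, n_coords) flattened bony surfaces of
        # normal subjects; shape: a deformed estimate to be regularized
        mean = train_shapes.mean(axis=0)
        _, _, Vt = np.linalg.svd(train_shapes - mean, full_matrices=False)
        modes = Vt[:n_modes]                # principal modes of normal variation
        b = modes @ (shape - mean)          # shape parameters of the estimate
        return mean + modes.T @ b           # nearest shape within the model

Projecting after each deformation step keeps the estimate within the span of normal anatomy while still letting it approach the patient's post-traumatic bone.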
Source: http://dx.doi.org/10.1007/978-3-030-32254-0_37
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6910247

One-Shot Generative Adversarial Learning for MRI Segmentation of Craniomaxillofacial Bony Structures.

IEEE Trans Med Imaging 2020 Mar 14;39(3):787-796. Epub 2019 Aug 14.

Compared to computed tomography (CT), magnetic resonance imaging (MRI) delineation of craniomaxillofacial (CMF) bony structures avoids harmful radiation exposure. However, bony boundaries are blurry in MRI, and structural information needs to be borrowed from CT during training. This is challenging because paired MRI-CT data are typically scarce. In this paper, we propose to make full use of unpaired data, which are typically abundant, along with a single paired MRI-CT scan to construct a one-shot generative adversarial model for automated MRI segmentation of CMF bony structures. Our model consists of a cross-modality image synthesis sub-network, which learns the mapping between CT and MRI, and an MRI segmentation sub-network. The two sub-networks are trained jointly in an end-to-end manner. Moreover, in the training phase, a neighbor-based anchoring method is proposed to reduce the ambiguity inherent in cross-modality synthesis, and a feature-matching-based semantic consistency constraint is proposed to encourage segmentation-oriented MRI synthesis. Experimental results demonstrate the superiority of our method, both qualitatively and quantitatively, over state-of-the-art MRI segmentation methods.
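A feature-matching consistency term of the kind described above can be sketched as follows: features the segmentation sub-network extracts from a synthesized MRI should match those it extracts from the one paired real MRI, steering synthesis toward segmentation-relevant content. The seg_features interface is an assumption for illustration:

    import torch.nn.functional as F

    def semantic_consistency(seg_features, mri_fake, mri_real):
        # seg_features: callable returning a list of intermediate feature
        # maps from the segmentation sub-network (assumed interface)
        return sum(F.l1_loss(f, r)
                   for f, r in zip(seg_features(mri_fake),
                                   seg_features(mri_real)))

Matching intermediate features rather than raw intensities is what makes the constraint "semantic": the synthesis is rewarded for preserving what the segmenter actually uses.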
Source: http://dx.doi.org/10.1109/TMI.2019.2935409
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7219540

Craniomaxillofacial Bony Structures Segmentation from MRI with Deep-Supervision Adversarial Learning.

Med Image Comput Comput Assist Interv 2018 Sep 13;11073:720-727. Epub 2018 Sep 13.

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA.

Automatic segmentation of medical images has abundant applications in clinical studies. Computed tomography (CT) plays a critical role in the diagnosis and surgical planning of craniomaxillofacial (CMF) surgeries because it clearly shows bony structures. However, CT imaging poses radiation risks for the subjects being scanned. Magnetic resonance imaging (MRI), by contrast, is considered safe and provides good visualization of soft tissues, but bony structures are nearly invisible in MRI. Segmenting bony structures from MRI is therefore quite challenging. In this paper, we propose a cascaded generative adversarial network with a deep-supervision discriminator (Deep-supGAN) for automatic bony structure segmentation. The first block in this architecture generates a high-quality CT image from an MRI, and the second block segments bony structures from the MRI and the generated CT image. Unlike traditional discriminators, the deep-supervision discriminator distinguishes the generated CT from the ground truth at multiple levels of feature maps. For segmentation, the loss is evaluated not only at the voxel level but also at higher, more abstract perceptual levels. Experimental results show that the proposed method generates CT images with clearer structural details and segments the bony structures more accurately than state-of-the-art methods.
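An objective in the spirit of the deep-supervision discriminator can be sketched briefly: the discriminator compares generated and real CT not only at its final output but at several intermediate levels. The per-level logit-map interface is assumed for illustration, not taken from the paper's code:

    import torch
    import torch.nn.functional as F

    def deep_sup_d_loss(disc_logits, ct_real, ct_fake):
        # disc_logits: callable returning one patch-logit map per feature
        # level of the discriminator (assumed interface)
        loss = 0.0
        for lr, lf in zip(disc_logits(ct_real), disc_logits(ct_fake)):
            # real pushed toward 1, fake toward 0, at every level
            loss = loss + F.binary_cross_entropy_with_logits(lr, torch.ones_like(lr))
            loss = loss + F.binary_cross_entropy_with_logits(lf, torch.zeros_like(lf))
        return loss

Supervising every level forces the generator to match real CT statistics at multiple scales, rather than fooling a single final decision.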
Source: http://dx.doi.org/10.1007/978-3-030-00937-3_82
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6235451