Publications by authors named "Pew-Thian Yap"

190 Publications

Brainwide functional networks associated with anatomically- and functionally-defined hippocampal subfields using ultrahigh-resolution fMRI.

Sci Rep 2021 May 25;11(1):10835. Epub 2021 May 25.

Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.

The hippocampus is critical for learning and memory and may be separated into anatomically-defined hippocampal subfields (aHPSFs). Hippocampal functional networks, particularly during resting state, are generally analyzed using aHPSFs as seed regions, under the assumption that function is homogeneous within a subfield yet heterogeneous between subfields. However, several prior studies have observed similar resting-state functional connectivity (FC) profiles between aHPSFs. Alternatively, data-driven approaches investigate hippocampal functional organization without a priori assumptions. However, insufficient spatial resolution may raise a number of caveats concerning the reliability of the results. Hence, we developed a functional Magnetic Resonance Imaging (fMRI) sequence on a 7 T MR scanner achieving 0.94 mm isotropic resolution with a TR of 2 s and brain-wide coverage to (1) investigate the functional organization within the hippocampus at rest, and (2) compare the brain-wide FC associated with fine-grained aHPSFs and functionally-defined hippocampal subfields (fHPSFs). This study showed that fHPSFs were arranged along the longitudinal axis of the hippocampus and were not comparable to the lamellar structures of aHPSFs. For brain-wide FC, fHPSFs, rather than aHPSFs, revealed that a number of subfields connected specifically with particular functional networks. Different functional networks also showed preferential connections with different portions of the hippocampal subfields.
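For readers less familiar with the seed-based analysis that this abstract contrasts with its data-driven parcellation, below is a minimal sketch of seed-based FC: the mean time series of a subfield seed is correlated with every voxel's time series. The array shapes and function name are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

def seed_fc(bold, seed_mask):
    """Seed-based FC: correlate the mean time series of a seed region
    (e.g., one hippocampal subfield) with every voxel's time series.

    bold      : (n_voxels, n_timepoints) preprocessed BOLD signals
    seed_mask : (n_voxels,) boolean mask of the seed region
    returns   : (n_voxels,) Pearson correlations (the brain-wide FC map)
    """
    seed_ts = bold[seed_mask].mean(axis=0)                        # seed time series
    bold_z = (bold - bold.mean(1, keepdims=True)) / bold.std(1, keepdims=True)
    seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    return bold_z @ seed_z / bold.shape[1]                        # correlation per voxel

# toy data: 1000 voxels, 200 time points (TR = 2 s as in the study)
rng = np.random.default_rng(0)
fc_map = seed_fc(rng.standard_normal((1000, 200)), np.arange(1000) < 20)
```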
DOI: http://dx.doi.org/10.1038/s41598-021-90364-7
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8149395
May 2021

Diverse data augmentation for learning image segmentation with cross-modality annotations.

Med Image Anal 2021 Jul 20;71:102060. Epub 2021 Apr 20.

Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA. Electronic address:

The dearth of annotated data is a major hurdle in building reliable image segmentation models. Manual annotation of medical images is tedious, time-consuming, and significantly variable across imaging modalities. The need for annotation can be ameliorated by leveraging an annotation-rich source modality in learning a segmentation model for an annotation-poor target modality. In this paper, we introduce a diverse data augmentation generative adversarial network (DDA-GAN) to train a segmentation model for an unannotated target image domain by borrowing information from an annotated source image domain. This is achieved by generating diverse augmented data for the target domain by one-to-many source-to-target translation. The DDA-GAN uses unpaired images from the source and target domains and is an end-to-end convolutional neural network that (i) explicitly disentangles domain-invariant structural features related to segmentation from domain-specific appearance features, (ii) combines structural features from the source domain with appearance features randomly sampled from the target domain for data augmentation, and (iii) trains the segmentation model with the augmented data in the target domain and the annotations from the source domain. The effectiveness of our method is demonstrated both qualitatively and quantitatively in comparison with the state of the art for segmentation of craniomaxillofacial bony structures via MRI and cardiac substructures via CT.
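As an illustration of the augmentation idea described above (combining source structure with randomly sampled target appearance), here is a minimal PyTorch sketch. The encoders, decoder, and tensor sizes are placeholders and do not reproduce the DDA-GAN architecture or its adversarial training.

```python
import torch
import torch.nn as nn

# Placeholder encoders/decoder standing in for the disentangling networks.
struct_enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())  # domain-invariant structure
appear_enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())  # domain-specific appearance
decoder    = nn.Sequential(nn.Conv2d(32, 1, 3, padding=1))

def augment(source_img, target_imgs):
    """One-to-many augmentation: keep the source structure (and hence its label),
    borrow appearance from a randomly sampled target-domain image."""
    s = struct_enc(source_img)                           # structure of the annotated source
    idx = torch.randint(len(target_imgs), (1,)).item()
    a = appear_enc(target_imgs[idx:idx + 1])             # appearance of a random target image
    return decoder(torch.cat([s, a], dim=1))             # synthetic target-style training image

src = torch.randn(1, 1, 64, 64)    # annotated source-modality slice
tgt = torch.randn(8, 1, 64, 64)    # unannotated target-modality slices
fake_target = augment(src, tgt)    # train the segmenter on this image + the source labels
```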
DOI: http://dx.doi.org/10.1016/j.media.2021.102060
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8184609
July 2021

Multi-site MRI harmonization via attention-guided deep domain adaptation for brain disorder identification.

Med Image Anal 2021 Jul 20;71:102076. Epub 2021 Apr 20.

Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA. Electronic address:

Structural magnetic resonance imaging (MRI) has shown great clinical and practical value in computer-aided brain disorder identification. Multi-site MRI data increase sample size and statistical power, but are susceptible to inter-site heterogeneity caused by different scanners, scanning protocols, and subject cohorts. Multi-site MRI harmonization (MMH) helps alleviate inter-site differences for subsequent analysis. MMH methods that operate at the image or feature-extraction level are simple but can lack robustness and flexibility. Even though several machine/deep-learning-based methods have been proposed for MMH, some require a portion of labeled data in the to-be-analyzed target domain or ignore the potential contributions of different brain regions to the identification of brain disorders. In this work, we propose an attention-guided deep domain adaptation (ADA) framework for MMH and apply it to automated brain disorder identification with multi-site MRIs. The proposed framework does not need any category label information for the target data, and can automatically identify discriminative regions in whole-brain MR images. Specifically, the proposed ADA is composed of three key modules: (1) an MRI feature encoding module to extract representations of input MRIs, (2) an attention discovery module to automatically locate discriminative dementia-related regions in each whole-brain MRI scan, and (3) a domain transfer module trained with adversarial learning for knowledge transfer between the source and target domains. Experiments have been performed on 2572 subjects from four benchmark datasets with T1-weighted structural MRIs, with results demonstrating the effectiveness of the proposed method in both tasks of brain disorder identification and disease progression prediction.
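The following PyTorch fragment sketches the adversarial domain-transfer component in generic form: a discriminator learns to separate source and target features, while the encoder is trained to make target features indistinguishable from source ones; only source data carry diagnostic labels. The attention-discovery module and all architectural details are omitted, and every layer size and name here is an assumption.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU())      # MRI feature encoder (placeholder)
domain_d = nn.Sequential(nn.Linear(64, 1))                   # domain discriminator
classifier = nn.Linear(64, 2)                                # disorder vs. control (source only)
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

def adaptation_losses(x_src, y_src, x_tgt):
    """Source images carry diagnostic labels; target images carry none."""
    f_src, f_tgt = encoder(x_src), encoder(x_tgt)
    # (1) supervised diagnosis loss on labeled source data
    cls_loss = ce(classifier(f_src), y_src)
    # (2) discriminator loss: distinguish source (1) from target (0) features
    d_loss = bce(domain_d(f_src.detach()), torch.ones(len(x_src), 1)) + \
             bce(domain_d(f_tgt.detach()), torch.zeros(len(x_tgt), 1))
    # (3) adversarial loss: encoder tries to make target features look like source
    adv_loss = bce(domain_d(f_tgt), torch.ones(len(x_tgt), 1))
    return cls_loss, d_loss, adv_loss

cls_loss, d_loss, adv_loss = adaptation_losses(
    torch.randn(4, 256), torch.tensor([0, 1, 0, 1]), torch.randn(4, 256))
```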
DOI: http://dx.doi.org/10.1016/j.media.2021.102076
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8184627
July 2021

Dilated perivascular space is related to reduced free-water in surrounding white matter among healthy adults and elderlies but not in patients with severe cerebral small vessel disease.

J Cereb Blood Flow Metab 2021 Apr 4:271678X211005875. Epub 2021 Apr 4.

Department of Radiology, School of Medicine, Second Affiliated Hospital of Zhejiang University, Zhejiang, China.

The perivascular space facilitates cerebral interstitial water clearance. However, it is unclear how dilated perivascular space (dPVS) affects the interstitial water of surrounding white matter. We aimed to determine the presence and extent of changes in normal-appearing white matter water components around dPVS in different populations. Twenty healthy elderly subjects and 15 elderly subjects with severe cerebral small vessel disease (CSVD, with lacunar infarction 6 months before the scan) were included in our study. Another 28 healthy adult subjects were enrolled and scanned with different parameters to assess whether the results were comparable. The normal-appearing white matter around dPVS was categorized into 10 layers (1 mm thickness each) based on distance to dPVS. We evaluated the mean isotropic-diffusing water volume fraction in each layer. We found significantly reduced free-water content in the layers closely adjacent to dPVS in the healthy elderly subjects; however, this reduction around dPVS was weaker in the CSVD subjects. We also found elevated free-water content within dPVS. dPVS thus appears to play different roles in healthy and CSVD subjects. The reduced water content around dPVS in healthy subjects suggests that these MR-visible PVSs are not always related to stagnation of fluid.
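A minimal sketch of the layer-wise analysis described above: compute each voxel's distance to the nearest dPVS voxel, bin normal-appearing white matter into 1-mm layers, and average the free-water fraction per layer. The masks, voxel size, and function name are illustrative; this is not the study's processing pipeline.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def freewater_by_layer(fw_map, pvs_mask, nawm_mask, voxel_mm=1.0, n_layers=10):
    """Mean free-water fraction in 1-mm-thick layers around dilated
    perivascular spaces, restricted to normal-appearing white matter."""
    # distance (mm) from every voxel to the nearest dPVS voxel
    dist = distance_transform_edt(~pvs_mask, sampling=voxel_mm)
    means = []
    for k in range(n_layers):                        # layer k: (k, k+1] mm from dPVS
        layer = nawm_mask & (dist > k) & (dist <= k + 1)
        means.append(fw_map[layer].mean() if layer.any() else np.nan)
    return np.array(means)

# toy volumes
rng = np.random.default_rng(1)
fw = rng.random((32, 32, 32))
pvs = np.zeros((32, 32, 32), bool); pvs[16, 16, 10:22] = True
nawm = np.ones((32, 32, 32), bool) & ~pvs
profile = freewater_by_layer(fw, pvs, nawm)
```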
DOI: http://dx.doi.org/10.1177/0271678X211005875
April 2021

Multi-task learning for segmentation and classification of tumors in 3D automated breast ultrasound images.

Med Image Anal 2021 05 28;70:101918. Epub 2020 Nov 28.

School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea. Electronic address:

Tumor classification and segmentation are two important tasks for computer-aided diagnosis (CAD) using 3D automated breast ultrasound (ABUS) images. However, they are challenging due to the significant shape variation of breast tumors and the fuzzy nature of ultrasound images (e.g., low contrast and low signal-to-noise ratio). Considering the correlation between tumor classification and segmentation, we argue that learning these two tasks jointly can improve the outcomes of both. In this paper, we propose a novel multi-task learning framework for joint segmentation and classification of tumors in ABUS images. The proposed framework consists of two sub-networks: an encoder-decoder network for segmentation and a light-weight multi-scale network for classification. To account for the fuzzy boundaries of tumors in ABUS images, our framework uses an iterative training strategy to refine feature maps with the help of probability maps obtained from previous iterations. Experimental results based on a clinical dataset of 170 3D ABUS volumes collected from 107 patients indicate that the proposed multi-task framework improves tumor segmentation and classification over the single-task learning counterparts.
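To make the joint-learning idea concrete, here is a hedged sketch of a combined objective: a Dice + cross-entropy segmentation term plus a classification term, weighted and minimized together. The loss weights and tensor shapes are assumptions, and the iterative refinement with probability maps is not shown.

```python
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary tumor mask (pred in [0, 1])."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def multitask_loss(seg_logits, seg_gt, cls_logits, cls_gt, w_cls=0.5):
    """Joint objective: segmentation (Dice + BCE) plus tumor classification."""
    seg_prob = torch.sigmoid(seg_logits)
    l_seg = dice_loss(seg_prob, seg_gt) + F.binary_cross_entropy_with_logits(seg_logits, seg_gt)
    l_cls = F.cross_entropy(cls_logits, cls_gt)       # e.g., benign vs. malignant
    return l_seg + w_cls * l_cls

loss = multitask_loss(torch.randn(2, 1, 32, 32, 32),
                      torch.randint(0, 2, (2, 1, 32, 32, 32)).float(),
                      torch.randn(2, 2), torch.tensor([0, 1]))
```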
DOI: http://dx.doi.org/10.1016/j.media.2020.101918
May 2021

Estimating Reference Bony Shape Models for Orthognathic Surgical Planning Using 3D Point-Cloud Deep Learning.

IEEE J Biomed Health Inform 2021 Jan 26;PP. Epub 2021 Jan 26.

Orthognathic surgical outcomes rely heavily on the quality of surgical planning. Automatic estimation of a reference facial bone shape significantly reduces experience-dependent variability and improves planning accuracy and efficiency. We propose an end-to-end deep learning framework to estimate patient-specific reference bony shape models for patients with orthognathic deformities. Specifically, we apply a point-cloud network to learn a vertex-wise deformation field from a patient's deformed bony shape, represented as a point cloud. The estimated deformation field is then used to correct the deformed bony shape to output a patient-specific reference bony surface model. To train our network effectively, we introduce a simulation strategy to synthesize deformed bones from any given normal bone, producing a relatively large and diverse dataset of shapes for training. Our method was evaluated using both synthetic and real patient data. Experimental results show that our framework estimates realistic reference bony shape models for patients with varying deformities. The performance of our method is consistently better than that of an existing method and several deep point-cloud networks. Our end-to-end estimation framework based on geometric deep learning shows great potential for improving clinical workflows.
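A small numpy sketch of the two ingredients the abstract describes: applying a predicted vertex-wise deformation field to a deformed bony point cloud, and simulating deformities from a normal bone to build training pairs. The Gaussian-bump simulation here is only a stand-in for the authors' simulation strategy, and all names are hypothetical.

```python
import numpy as np

def apply_deformation(points, deformation):
    """Correct a deformed bony shape by adding a per-vertex displacement.

    points      : (N, 3) coordinates of the patient's deformed bony surface
    deformation : (N, 3) displacement predicted by the point-cloud network
    """
    return points + deformation

def synthesize_deformity(normal_points, centers, sigmas, amplitudes):
    """Perturb a normal bone with smooth, localized bumps so the network sees
    diverse deformities (the authors' exact simulation is not reproduced)."""
    deformed = normal_points.copy()
    for c, s, a in zip(centers, sigmas, amplitudes):
        w = np.exp(-np.sum((normal_points - c) ** 2, axis=1) / (2 * s ** 2))
        deformed += a * w[:, None]                     # push vertices along a fixed offset
    return deformed

normal = np.random.rand(2048, 3) * 100.0               # toy "normal" bony point cloud
deformed = synthesize_deformity(normal, centers=[normal[0]], sigmas=[15.0],
                                amplitudes=[np.array([0.0, 0.0, 5.0])])
target_field = normal - deformed                        # supervision: field that undoes the deformity
```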
DOI: http://dx.doi.org/10.1109/JBHI.2021.3054494
January 2021

A Mutual Multi-Scale Triplet Graph Convolutional Network for Classification of Brain Disorders Using Functional or Structural Connectivity.

IEEE Trans Med Imaging 2021 Apr 1;40(4):1279-1289. Epub 2021 Apr 1.

Brain connectivity alterations associated with mental disorders have been widely reported in both functional MRI (fMRI) and diffusion MRI (dMRI). However, extracting useful information from the vast amount of information afforded by brain networks remains a great challenge. By capturing network topology, graph convolutional networks (GCNs) have been shown to be superior in learning network representations tailored for identifying specific brain disorders. Existing graph construction techniques generally rely on a specific brain parcellation to define regions-of-interest (ROIs) for constructing networks, often limiting the analysis to a single spatial scale. In addition, most methods focus on the pairwise relationships between ROIs and ignore high-order associations between subjects. In this letter, we propose a mutual multi-scale triplet graph convolutional network (MMTGCN) to analyze functional and structural connectivity for brain disorder diagnosis. We first employ several templates with different scales of ROI parcellation to construct coarse-to-fine brain connectivity networks for each subject. Then, a triplet GCN (TGCN) module is developed to learn functional/structural representations of brain connectivity networks at each scale, with the triplet relationship among subjects explicitly incorporated into the learning process. Finally, we propose a template mutual learning strategy to train the TGCNs of different scales collaboratively for disease classification. Experimental results on 1,160 subjects from three datasets with fMRI or dMRI data demonstrate that our MMTGCN outperforms several state-of-the-art methods in identifying three types of brain disorders.
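The triplet constraint mentioned above can be illustrated with a short PyTorch sketch: graph-level embeddings of an anchor subject are pulled toward a same-class subject and pushed away from a different-class subject. The GCN, the multi-scale parcellations, and the mutual-learning strategy are not reproduced; embedding sizes and the margin are assumptions.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet constraint on graph-level embeddings: the anchor subject should be
    closer to a same-class subject (positive) than to a different-class subject
    (negative) by at least `margin`."""
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    return F.relu(d_ap - d_an + margin).mean()

# toy graph-level embeddings produced by a (placeholder) GCN at one parcellation scale
anchor, positive, negative = torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 64)
loss = triplet_loss(anchor, positive, negative)
```

In the multi-scale setting described in the abstract, one such loss would presumably be computed per parcellation scale and combined with the classification objective.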
DOI: http://dx.doi.org/10.1109/TMI.2021.3051604
April 2021

Gaussianization of Diffusion MRI Data Using Spatially Adaptive Filtering.

Med Image Anal 2021 02 17;68:101828. Epub 2020 Oct 17.

Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, U.S.A. Electronic address:

Diffusion MRI magnitude data, typically Rician or noncentral χ distributed, are affected by the noise floor, which falsely elevates the signal, reduces image contrast, and biases estimation of diffusion parameters. The noise floor can be avoided by extracting real-valued Gaussian-distributed data from complex diffusion-weighted images via phase correction, which is performed by rotating each complex diffusion-weighted image based on its phase so that the actual image content resides in the real part. The imaginary part can then be discarded, leaving only the real part to form a Gaussian-noise image that is not confounded by the noise floor. The effectiveness of phase correction depends on the estimation of the background phase associated with factors such as brain motion, cardiac pulsation, perfusion, and respiration. Most existing smoothing techniques, applied to the real and imaginary images for phase estimation, assume spatially-stationary noise. This assumption does not necessarily hold in real data. In this paper, we introduce an adaptive filtering approach, called the multi-kernel filter (MKF), for image smoothing catering to spatially-varying noise. Inspired by the mechanisms of human vision, MKF employs a bilateral filter with spatially-varying kernels. Extensive experiments demonstrate that MKF significantly improves spatial adaptivity and outperforms various state-of-the-art filters in signal Gaussianization.
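A minimal numpy/scipy sketch of the phase-correction step the abstract builds on: estimate a smooth background phase, rotate the complex image by that phase, and keep the real part. A fixed-width Gaussian filter is used here purely as a stand-in for the spatially adaptive multi-kernel filter proposed in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def phase_correct(dwi_complex, sigma=2.0):
    """Rotate each voxel by an estimated background phase so the image content
    lies in the real channel; discard the (noise-only) imaginary part.
    The Gaussian smoother is a stand-in for the spatially adaptive MKF."""
    smoothed = gaussian_filter(dwi_complex.real, sigma) + 1j * gaussian_filter(dwi_complex.imag, sigma)
    background_phase = np.angle(smoothed)
    rotated = dwi_complex * np.exp(-1j * background_phase)
    return rotated.real          # Gaussian-noise image, free of the noise floor

# toy complex diffusion-weighted slice
rng = np.random.default_rng(0)
signal = rng.random((64, 64)) * np.exp(1j * 0.3)
noisy = signal + 0.05 * (rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64)))
real_dwi = phase_correct(noisy)
```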
DOI: http://dx.doi.org/10.1016/j.media.2020.101828
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7855815
February 2021

Difficulty-aware hierarchical convolutional neural networks for deformable registration of brain MR images.

Med Image Anal 2021 01 30;67:101817. Epub 2020 Sep 30.

Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA. Electronic address:

The aim of deformable brain image registration is to align anatomical structures, which can potentially vary with large and complex deformations. Anatomical structures vary in size and shape, requiring the registration algorithm to estimate deformation fields at various degrees of complexity. Here, we present a difficulty-aware model based on an attention mechanism to automatically identify hard-to-register regions, allowing better estimation of large complex deformations. The difficulty-aware model is incorporated into a cascaded neural network consisting of three sub-networks to fully leverage both global and local contextual information for effective registration. The first sub-network is trained at the image level to predict a coarse-scale deformation field, which is then used for initializing the subsequent sub-network. The next two sub-networks progressively optimize at the patch level with different resolutions to predict a fine-scale deformation field. Embedding difficulty-aware learning into the hierarchical neural network allows harder patches to be identified in the deeper sub-networks at higher resolutions for refining the deformation field. Experiments conducted on four public datasets validate that our method achieves promising registration accuracy with better preservation of topology, compared with state-of-the-art registration methods.
DOI: http://dx.doi.org/10.1016/j.media.2020.101817
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7725910
January 2021

Dynamic neural circuit disruptions associated with antisocial behaviors.

Hum Brain Mapp 2021 Feb 16;42(2):329-344. Epub 2020 Oct 16.

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA.

Antisocial behavior (ASB) is believed to have neural substrates; however, the association between ASB and functional brain networks remains unclear. The temporal variability of functional connectivity (or dynamic FC) derived from resting-state functional MRI has been suggested as a useful metric for studying abnormal behaviors including ASB. This is the first study using low-frequency fluctuations of the dynamic FC to unravel potential system-level neural correlates of ASB. Specifically, we associated individual dynamic FC patterns with the ASB scores (measured by the Antisocial Process Screening Device) of male offenders (age: 23.29 ± 3.36 years) based on machine learning. Results showed that the dynamic FCs were associated with individual ASB scores. Moreover, we found that it was mainly the inter-network dynamic FCs that were negatively associated with ASB severity. Three major high-order cognitive functional networks and the sensorimotor network were found to be more associated with ASB. We further found that impaired behavior in the ASB subjects was mainly associated with decreased FC dynamics in these networks, which may explain why ASB subjects usually have impaired executive control and emotional processing functions. Our study shows that temporal variation of the FC could be a promising tool for ASB assessment, treatment, and prevention.
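For context, dynamic FC is commonly quantified with sliding-window correlations; the sketch below computes the temporal standard deviation of windowed FC for every region pair. The window length, step size, and region count are illustrative and not the settings used in this study.

```python
import numpy as np

def dynamic_fc_variability(ts, win=30, step=2):
    """ts: (n_regions, n_timepoints) regional BOLD time series.
    Returns the standard deviation of windowed FC for every region pair,
    a simple proxy for the temporal variability of functional connectivity."""
    n_regions, T = ts.shape
    fcs = []
    for start in range(0, T - win + 1, step):
        w = ts[:, start:start + win]
        fcs.append(np.corrcoef(w))                  # (n_regions, n_regions) FC in this window
    fcs = np.stack(fcs)                             # (n_windows, n_regions, n_regions)
    return fcs.std(axis=0)                          # variability of each connection over time

variability = dynamic_fc_variability(np.random.randn(90, 200))
```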
DOI: http://dx.doi.org/10.1002/hbm.25225
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7776000
February 2021

Deep Bayesian Hashing With Center Prior for Multi-Modal Neuroimage Retrieval.

IEEE Trans Med Imaging 2021 Feb 2;40(2):503-513. Epub 2021 Feb 2.

Multi-modal neuroimage retrieval has greatly facilitated the efficiency and accuracy of decision making in clinical practice by providing physicians with previous cases (with visually similar neuroimages) and corresponding treatment records. However, existing methods for image retrieval usually fail when applied directly to multi-modal neuroimage databases, since neuroimages generally have smaller inter-class variation and larger inter-modal discrepancy compared to natural images. To this end, we propose a deep Bayesian hash learning framework, called CenterHash, which can map multi-modal data into a shared Hamming space and learn discriminative hash codes from imbalanced multi-modal neuroimages. The key idea to tackle the small inter-class variation and large inter-modal discrepancy is to learn a common center representation for similar neuroimages from different modalities and encourage hash codes to be explicitly close to their corresponding center representations. Specifically, we measure the similarity between hash codes and their corresponding center representations and treat it as a center prior in the proposed Bayesian learning framework. A weighted contrastive likelihood loss function is also developed to facilitate hash learning from imbalanced neuroimage pairs. Comprehensive empirical evidence shows that our method can generate effective hash codes and yield state-of-the-art performance in cross-modal retrieval on three multi-modal neuroimage datasets.
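A short PyTorch sketch of the center-prior idea: each class keeps a learnable center code, and the (continuous) hash codes of neuroimages from any modality are pulled toward their class center before binarization. The code length, class count, and squared-error form used here are assumptions; the paper's Bayesian likelihood and weighted contrastive loss are not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_classes, n_bits = 3, 48
centers = nn.Parameter(torch.randn(n_classes, n_bits))      # one shared center per class

def center_prior_loss(codes, labels):
    """Encourage continuous hash codes (pre-sign outputs of the hashing network)
    to lie close to the center of their class, across modalities."""
    return F.mse_loss(torch.tanh(codes), torch.tanh(centers[labels]))

codes = torch.randn(16, n_bits, requires_grad=True)           # codes from MRI/PET branches
labels = torch.randint(0, n_classes, (16,))
loss = center_prior_loss(codes, labels)
binary_codes = torch.sign(torch.tanh(codes)).detach()         # final ±1 hash codes for retrieval
```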
DOI: http://dx.doi.org/10.1109/TMI.2020.3030752
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7909752
February 2021

Anatomy-Regularized Representation Learning for Cross-Modality Medical Image Segmentation.

IEEE Trans Med Imaging 2021 01 29;40(1):274-285. Epub 2020 Dec 29.

An increasing number of studies are leveraging unsupervised cross-modality synthesis to mitigate the limited label problem in training medical image segmentation models. They typically transfer ground truth annotations from a label-rich imaging modality to a label-lacking imaging modality, under an assumption that different modalities share the same anatomical structure information. However, since these methods commonly use voxel/pixel-wise cycle-consistency to regularize the mappings between modalities, high-level semantic information is not necessarily preserved. In this paper, we propose a novel anatomy-regularized representation learning approach for segmentation-oriented cross-modality image synthesis. It learns a common feature encoding across different modalities to form a shared latent space, where 1) the input and its synthesis present consistent anatomical structure information, and 2) the transformation between two images in one domain is preserved by their syntheses in another domain. We applied our method to the tasks of cross-modality skull segmentation and cardiac substructure segmentation. Experimental results demonstrate the superiority of our method in comparison with state-of-the-art cross-modality medical image segmentation methods.
DOI: http://dx.doi.org/10.1109/TMI.2020.3025133
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8120796
January 2021

Hierarchical Nonlocal Residual Networks for Image Quality Assessment of Pediatric Diffusion MRI With Limited and Noisy Annotations.

IEEE Trans Med Imaging 2020 11 28;39(11):3691-3702. Epub 2020 Oct 28.

Fast and automated image quality assessment (IQA) of diffusion MR images is crucial for making timely decisions about rescans. However, learning a model for this task is challenging, as the number of annotated data is limited and the annotation labels might not always be correct. As a remedy, we introduce an automatic IQA method based on hierarchical non-local residual networks for pediatric diffusion MR images. Our IQA is performed in three sequential stages, i.e., 1) slice-wise IQA, where a nonlocal residual network is first pre-trained to annotate each slice with an initial quality rating (i.e., pass/questionable/fail), which is subsequently refined via iterative semi-supervised learning and slice self-training; 2) volume-wise IQA, which agglomerates the features extracted from the slices of a volume and uses a nonlocal network to annotate the quality rating for each volume via iterative volume self-training; and 3) subject-wise IQA, which ensembles the volumetric IQA results to determine the overall image quality pertaining to a subject. Experimental results demonstrate that our method, trained using only samples of modest size, exhibits great generalizability, and is capable of conducting rapid hierarchical IQA with near-perfect accuracy.
DOI: http://dx.doi.org/10.1109/TMI.2020.3002708
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7606371
November 2020

Probing Tissue Microarchitecture of the Baby Brain via Spherical Mean Spectrum Imaging.

IEEE Trans Med Imaging 2020 11 28;39(11):3607-3618. Epub 2020 Oct 28.

During the first years of life, the human brain undergoes dynamic spatially-heterogeneous changes, involving differentiation of neuronal types, dendritic arborization, axonal ingrowth, outgrowth and retraction, synaptogenesis, and myelination. To better quantify these changes, this article presents a method for probing tissue microarchitecture by characterizing water diffusion in a spectrum of length scales, factoring out the effects of intra-voxel orientation heterogeneity. Our method is based on the spherical means of the diffusion signal, computed over gradient directions for a set of diffusion weightings (i.e., b-values). We decompose the spherical mean profile at each voxel into a spherical mean spectrum (SMS), which essentially encodes the fractions of spin packets undergoing fine- to coarse-scale diffusion processes, characterizing restricted and hindered diffusion stemming respectively from intra- and extra-cellular water compartments. From the SMS, multiple orientation-distribution-invariant indices can be computed, allowing for example the quantification of neurite density, microscopic fractional anisotropy (μFA), per-axon axial/radial diffusivity, and free/restricted isotropic diffusivity. We show that these indices can be computed for the developing brain for greater sensitivity and specificity to development-related changes in tissue microstructure. We also demonstrate that our method, called spherical mean spectrum imaging (SMSI), is fast, accurate, and can overcome the biases associated with other state-of-the-art microstructure models.
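The starting quantity of SMSI, the spherical mean per diffusion weighting, can be sketched in a few lines: average the signal over all gradient directions within each b-shell, which removes the dependence on fiber orientation. The shell-grouping tolerance and toy signal below are assumptions; the spectral decomposition itself is not shown.

```python
import numpy as np

def spherical_means(signal, bvals, tol=50.0):
    """Average the diffusion signal over gradient directions within each shell.

    signal : (n_volumes,) signal of one voxel
    bvals  : (n_volumes,) b-value of each volume
    returns: dict {shell b-value: spherical mean}
    """
    shells = {}
    for b in np.unique(np.round(bvals / tol) * tol):       # group b-values into shells
        sel = np.abs(bvals - b) < tol
        shells[float(b)] = float(signal[sel].mean())        # orientation-averaged signal
    return shells

bvals = np.r_[np.zeros(6), np.full(32, 1000.0), np.full(32, 2000.0)]
signal = np.exp(-bvals * 0.7e-3) + 0.01 * np.random.randn(bvals.size)
print(spherical_means(signal, bvals))
```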
DOI: http://dx.doi.org/10.1109/TMI.2020.3001175
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7688284
November 2020

SLIR: Synthesis, localization, inpainting, and registration for image-guided thermal ablation of liver tumors.

Med Image Anal 2020 10 25;65:101763. Epub 2020 Jun 25.

School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China. Electronic address:

Thermal ablation is a minimally invasive procedure for treating small or unresectable tumors. Although CT is widely used for guiding ablation procedures, the contrast of tumors against normal soft tissues is often poor in CT scans, complicating accurate thermal ablation. In this paper, we propose a fast MR-CT image registration method to overlay pre-procedural MR (pMR) and pre-procedural CT (pCT) images onto an intra-procedural CT (iCT) image to guide the thermal ablation of liver tumors. At the pre-procedural stage, a Cycle-GAN model with a mutual information constraint is employed to generate a synthesized CT (sCT) image from the input pMR image. Then, pMR-pCT image registration is carried out via traditional mono-modal sCT-pCT image registration. At the intra-procedural stage, the region of the probe and its artifacts are automatically localized and inpainted in the iCT image. Then, an unsupervised registration network (UR-Net) is used to efficiently align the pCT with the inpainted iCT (inpCT) image. The final transform from pMR to iCT is obtained by concatenating the two estimated transforms, i.e., (i) from pMR image space to pCT image space (via sCT) and (ii) from pCT image space to iCT image space (via inpCT). The proposed method has been evaluated on a real clinical dataset and compared with state-of-the-art methods. Experimental results confirm that the proposed method achieves high registration accuracy with fast computation speed.
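The final concatenation step can be illustrated with homogeneous transforms: the pMR-to-iCT mapping is the composition of the pMR-to-pCT transform (estimated via the sCT) and the pCT-to-iCT transform (estimated via the inpCT). Affine matrices are used below only as stand-ins for the deformable transforms estimated in the paper.

```python
import numpy as np

def compose_affines(T_pmr_to_pct, T_pct_to_ict):
    """Concatenate two 4x4 homogeneous transforms: p_iCT = T2 @ T1 @ p_pMR."""
    return T_pct_to_ict @ T_pmr_to_pct

# illustrative affine stand-ins for the two estimated (deformable) transforms
T1 = np.eye(4); T1[:3, 3] = [2.0, -1.5, 0.5]          # pMR -> pCT (via the synthesized sCT)
T2 = np.eye(4); T2[:3, 3] = [-0.8, 0.3, 1.2]          # pCT -> iCT (via the inpainted inpCT)
T_pmr_to_ict = compose_affines(T1, T2)

point_pmr = np.array([10.0, 20.0, 30.0, 1.0])          # homogeneous point in pMR space
point_ict = T_pmr_to_ict @ point_pmr                    # mapped into the intra-procedural CT
```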
DOI: http://dx.doi.org/10.1016/j.media.2020.101763
October 2020

Real-Time Quality Assessment of Pediatric MRI via Semi-Supervised Deep Nonlocal Residual Neural Networks.

IEEE Trans Image Process 2020 May 8. Epub 2020 May 8.

In this paper, we introduce an image quality assessment (IQA) method for pediatric T1- and T2-weighted MR images. IQA is first performed slice-wise using a nonlocal residual neural network (NR-Net) and then volume-wise by agglomerating the slice QA results using random forest. Our method requires only a small amount of quality-annotated images for training and is designed to be robust to annotation noise that might occur due to rater errors and the inevitable mix of good and bad slices in an image volume. Using a small set of quality-assessed images, we pre-train NR-Net to annotate each image slice with an initial quality rating (i.e., pass, questionable, fail), which we then refine by semi-supervised learning and iterative self-training. Experimental results demonstrate that our method, trained using only samples of modest size, exhibits great generalizability and is capable of real-time (milliseconds per volume) large-scale IQA with near-perfect accuracy.
DOI: http://dx.doi.org/10.1109/TIP.2020.2992079
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7648726
May 2020

Estimating Reference Shape Model for Personalized Surgical Reconstruction of Craniomaxillofacial Defects.

IEEE Trans Biomed Eng 2021 Feb 20;68(2):362-373. Epub 2021 Jan 20.

Objective: To estimate a patient-specific reference bone shape model for a patient with craniomaxillofacial (CMF) defects due to facial trauma.

Methods: We proposed an automatic facial bone shape estimation framework using pre-traumatic conventional portrait photos and post-traumatic head computed tomography (CT) scans via a 3D face reconstruction and a deformable shape model. Specifically, a three-dimensional (3D) face was first reconstructed from the patient's pre-traumatic portrait photos. Second, a correlation model between the skin and bone surfaces was constructed using a sparse representation based on the CT images of training normal subjects. Third, by feeding the reconstructed 3D face into the correlation model, an initial reference shape model was generated. In addition, we refined the initial estimation by applying non-rigid surface matching between the initially estimated shape and the patient's post-traumatic bone based on the adaptive-focus deformable shape model (AFDSM). Furthermore, a statistical shape model, built from the training normal subjects, was utilized to constrain the deformation process to avoid overfitting.

Results And Conclusion: The proposed method was evaluated using both synthetic and real patient data. Experimental results show that the patient's abnormal facial bony structure can be recovered using our method, and the estimated reference shape model is considered clinically acceptable by an experienced CMF surgeon.

Significance: The proposed method is more suitable to the complex CMF defects for CMF reconstructive surgical planning.
DOI: http://dx.doi.org/10.1109/TBME.2020.2990586
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8163108
February 2021

A toolbox for brain network construction and classification (BrainNetClass).

Hum Brain Mapp 2020 07 12;41(10):2808-2826. Epub 2020 Mar 12.

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA.

Brain functional networks have been increasingly used in understanding brain functions and diseases. While many network construction methods have been proposed, progress in the field still largely relies on static pairwise Pearson's correlation-based functional networks and group-level comparisons. We introduce a "Brain Network Construction and Classification (BrainNetClass)" toolbox to bring more advanced brain network construction methods to the field, including state-of-the-art methods recently developed to capture complex and high-order interactions among brain regions. The toolbox also integrates a well-accepted and rigorous classification framework based on brain connectome features toward individualized disease diagnosis, in the hope that advanced network modeling can boost subsequent classification. BrainNetClass is a MATLAB-based, open-source, cross-platform toolbox with both user-friendly graphical interfaces and a command-line mode, targeting cognitive neuroscientists and clinicians and promoting the reliability, reproducibility, and interpretability of connectome-based, computer-aided diagnosis. It generates abundant classification-related results, from network presentations to contributing features that have been largely ignored by most studies, to grant users the ability to evaluate the disease diagnostic model and its robustness and generalizability. We demonstrate the effectiveness of the toolbox on real resting-state functional MRI datasets. BrainNetClass (v1.0) is available at https://github.com/zzstefan/BrainNetClass.
DOI: http://dx.doi.org/10.1002/hbm.24979
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7294070
July 2020

Wavelet-based Semi-supervised Adversarial Learning for Synthesizing Realistic 7T from 3T MRI.

Med Image Comput Comput Assist Interv 2019 Oct 10;11767:786-794. Epub 2019 Oct 10.

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA.

Ultra-high field 7T magnetic resonance imaging (MRI) scanners produce images with exceptional anatomical details, which can facilitate diagnosis and prognosis. However, 7T MRI scanners are often cost prohibitive and hence inaccessible. In this paper, we propose a novel wavelet-based semi-supervised adversarial learning framework to synthesize 7T MR images from their 3T counterparts. Unlike most learning methods that rely on supervision requiring a significant amount of 3T-7T paired data, our method applies a semi-supervised learning mechanism to leverage unpaired 3T and 7T MR images to learn the 3T-to-7T mapping when 3T-7T paired data are scarce. This is achieved via a cycle generative adversarial network that operates in the joint spatial-wavelet domain for the synthesis of multi-frequency details. Extensive experimental results show that our method achieves better performance than state-of-the-art methods trained using fully paired data.
DOI: http://dx.doi.org/10.1007/978-3-030-32251-9_86
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7065678
October 2019

Reconstructing High-Quality Diffusion MRI Data from Orthogonal Slice-Undersampled Data Using Graph Convolutional Neural Networks.

Med Image Comput Comput Assist Interv 2019 Oct 10;11766:529-537. Epub 2019 Oct 10.

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.

Diffusion MRI (dMRI), while powerful for the characterization of tissue microstructure, suffers from long acquisition times. In this paper, we propose a super-resolution (SR) reconstruction method based on orthogonal slice-undersampling for accelerated dMRI acquisition. Instead of scanning full diffusion-weighted (DW) image volumes, only a subsample of equally-spaced slices need to be acquired. We show that complementary information from DW volumes corresponding to different diffusion wave-vectors can be harnessed using graph convolutional neural networks for reconstruction of the full DW volumes. We demonstrate that our SR reconstruction method outperforms typical interpolation methods and mitigates partial volume effects. Experimental results indicate that acceleration up to a factor of 5 can be achieved with minimal information loss.
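A small sketch of the slice-undersampling pattern: each diffusion-weighted volume keeps only every k-th slice, with the starting slice shifted across wave-vectors so that different volumes cover complementary slices. The shift scheme and array sizes are assumptions (the paper also considers orthogonal slice orientations), and the graph-CNN reconstruction is not shown.

```python
import numpy as np

def undersample_slices(volumes, factor=5):
    """Keep every `factor`-th slice of each DW volume, shifting the starting
    slice with the volume index so different wave-vectors cover different slices."""
    sub = []
    for q, vol in enumerate(volumes):                   # vol: (X, Y, Z) DW volume
        offset = q % factor
        sub.append(vol[:, :, offset::factor])           # equally spaced slices only
    return sub

dwis = [np.random.rand(64, 64, 60) for _ in range(30)]  # 30 diffusion wave-vectors
undersampled = undersample_slices(dwis, factor=5)        # ~5x fewer slices acquired per volume
```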
DOI: http://dx.doi.org/10.1007/978-3-030-32248-9_59
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7065676
October 2019

Multifold Acceleration of Diffusion MRI via Deep Learning Reconstruction from Slice-Undersampled Data.

Inf Process Med Imaging 2019 Jun 22;11492:530-541. Epub 2019 May 22.

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.

Diffusion MRI (dMRI), while powerful for characterization of tissue microstructure, suffers from long acquisition time. In this paper, we present a method for effective diffusion MRI reconstruction from slice-undersampled data. Instead of full diffusion-weighted (DW) image volumes, only a subsample of equally-spaced slices need to be acquired. We show that complementary information from DW volumes corresponding to different diffusion wavevectors can be harnessed using graph convolutional neural networks for reconstruction of the full DW volumes. The experimental results indicate a high acceleration factor of up to 5 can be achieved with minimal information loss.
DOI: http://dx.doi.org/10.1007/978-3-030-20351-1_41
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7065677
June 2019

Surface-Volume Consistent Construction of Longitudinal Atlases for the Early Developing Brain.

Med Image Comput Comput Assist Interv 2019 Oct 10;11765:815-822. Epub 2019 Oct 10.

Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA.

Infant brain atlases are essential for characterizing structural changes in the developing brain. Volumetric and cortical atlases are typically constructed independently, potentially causing discrepancies between tissue boundaries and cortical surfaces. In this paper, we present a method for surface-volume consistent construction of longitudinal brain atlases of infants from 2 weeks to 12 months of age. We first construct the 12-month atlas via groupwise surface-constrained volumetric registration. The longitudinal displacements of each subject with respect to different time points are then transported parallelly to the 12-month atlas space. The 12-month cortico-volumetric atlas is finally warped temporally to each month prior to the 12th month using the transported displacements. Experimental results indicate that the longitudinal atlases generated are consistent in terms of tissue boundaries and cortical surfaces, hence allowing joint surface-volume analysis to be performed in a common space.
DOI: http://dx.doi.org/10.1007/978-3-030-32245-8_90
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7052685
October 2019

Synthesized 7T MRI from 3T MRI via deep learning in spatial and wavelet domains.

Med Image Anal 2020 05 19;62:101663. Epub 2020 Feb 19.

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 136713, South Korea. Electronic address:

Ultra-high field 7T MRI scanners, while producing images with exceptional anatomical details, are cost prohibitive and hence highly inaccessible. In this paper, we introduce a novel deep learning network that fuses complementary information from spatial and wavelet domains to synthesize 7T T1-weighted images from their 3T counterparts. Our deep learning network leverages wavelet transformation to facilitate effective multi-scale reconstruction, taking into account both low-frequency tissue contrast and high-frequency anatomical details. Our network utilizes a novel wavelet-based affine transformation (WAT) layer, which modulates feature maps from the spatial domain with information from the wavelet domain. Extensive experimental results demonstrate the capability of the proposed method in synthesizing high-quality 7T images with better tissue contrast and greater details, outperforming state-of-the-art methods.
DOI: http://dx.doi.org/10.1016/j.media.2020.101663
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7237331
May 2020

Deep Multi-Scale Mesh Feature Learning for Automated Labeling of Raw Dental Surfaces From 3D Intraoral Scanners.

IEEE Trans Med Imaging 2020 07 5;39(7):2440-2450. Epub 2020 Feb 5.

Precisely labeling teeth on digitized 3D dental surface models is a precondition for tooth position rearrangement in orthodontic treatment planning. However, it is a challenging task, primarily due to the abnormal and varying appearance of patients' teeth. The emerging use of intraoral scanners (IOSs) in clinics further increases the difficulty of automated tooth labeling, as the raw surfaces acquired by IOS are typically low-quality at gingival and deep intraoral regions. In recent years, some pioneering end-to-end methods (e.g., PointNet) have been proposed in the computer vision and graphics communities to directly consume raw surfaces for 3D shape segmentation. Although these methods are potentially applicable to our task, most of them fail to capture the fine-grained local geometric context that is critical to the identification of small teeth with varying shapes and appearances. In this paper, we propose an end-to-end deep-learning method, called MeshSegNet, for automated tooth labeling on raw dental surfaces. Using multiple raw surface attributes as inputs, MeshSegNet integrates a series of graph-constrained learning modules along its forward path to hierarchically extract multi-scale local contextual features. Then, a dense fusion strategy is applied to combine local-to-global geometric features for the learning of higher-level features for mesh cell annotation. The predictions produced by MeshSegNet are further post-processed by a graph-cut refinement step for final segmentation. We evaluated MeshSegNet using a real-patient dataset consisting of raw maxillary surfaces acquired by 3D IOSs. Experimental results, based on 5-fold cross-validation, demonstrate that MeshSegNet significantly outperforms state-of-the-art deep learning methods for 3D shape segmentation.
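To illustrate what "multiple raw surface attributes" of a mesh cell might look like, the sketch below assembles a simple 15-D feature per triangle (vertex coordinates, centroid, unit normal). This particular feature set is an assumption for illustration; the MeshSegNet architecture and the graph-cut refinement are not reproduced.

```python
import numpy as np

def cell_attributes(vertices, faces):
    """Per-triangle input features for a mesh-labeling network:
    the 9 vertex coordinates, the cell centroid, and the unit normal (15-D)."""
    tri = vertices[faces]                               # (n_cells, 3, 3)
    centroid = tri.mean(axis=1)                         # (n_cells, 3)
    normal = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    normal /= np.linalg.norm(normal, axis=1, keepdims=True) + 1e-12
    return np.concatenate([tri.reshape(len(faces), 9), centroid, normal], axis=1)

# toy mesh: two triangles
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
faces = np.array([[0, 1, 2], [1, 3, 2]])
features = cell_attributes(verts, faces)                # shape (2, 15)
```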
DOI: http://dx.doi.org/10.1109/TMI.2020.2971730
July 2020

Large-scale dynamic causal modeling of major depressive disorder based on resting-state functional magnetic resonance imaging.

Hum Brain Mapp 2020 03 5;41(4):865-881. Epub 2019 Nov 5.

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina.

Major depressive disorder (MDD) is a serious mental illness characterized by dysfunctional connectivity among distributed brain regions. Previous connectome studies based on functional magnetic resonance imaging (fMRI) have focused primarily on undirected functional connectivity and existing directed effective connectivity (EC) studies concerned mostly task-based fMRI and incorporated only a few brain regions. To overcome these limitations and understand whether MDD is mediated by within-network or between-network connectivities, we applied spectral dynamic causal modeling to estimate EC of a large-scale network with 27 regions of interests from four distributed functional brain networks (default mode, executive control, salience, and limbic networks), based on large sample-size resting-state fMRI consisting of 100 healthy subjects and 100 individuals with first-episode drug-naive MDD. We applied a newly developed parametric empirical Bayes (PEB) framework to test specific hypotheses. We showed that MDD altered EC both within and between high-order functional networks. Specifically, MDD is associated with reduced excitatory connectivity mainly within the default mode network (DMN), and between the default mode and salience networks. In addition, the network-averaged inhibitory EC within the DMN was found to be significantly elevated in the MDD. The coexistence of the reduced excitatory but increased inhibitory causal connections within the DMNs may underlie disrupted self-recognition and emotional control in MDD. Overall, this study emphasizes that MDD could be associated with altered causal interactions among high-order brain functional networks.
DOI: http://dx.doi.org/10.1002/hbm.24845
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7268036
March 2020

Estimating Reference Bony Shape Model for Personalized Surgical Reconstruction of Posttraumatic Facial Defects.

Med Image Comput Comput Assist Interv 2019 Oct 10;11768:327-335. Epub 2019 Oct 10.

BRIC and Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, USA.

In this paper, we introduce a method for estimating patient-specific reference bony shape models for planning of reconstructive surgery for patients with acquired craniomaxillofacial (CMF) trauma. We propose an automatic bony shape estimation framework using pre-traumatic portrait photographs and post-traumatic head computed tomography (CT) scans. A 3D facial surface is first reconstructed from the patient's pre-traumatic photographs. An initial estimation of the patient's normal bony shape is then obtained with the reconstructed facial surface via sparse representation using a dictionary of paired facial and bony surfaces of normal subjects. We further refine the bony shape model by deforming the initial bony shape model to the post-traumatic 3D CT bony model, regularized by a statistical shape model built from a database of normal subjects. Experimental results show that our method is capable of effectively recovering the patient's normal facial bony shape in regions with defects, allowing CMF surgical planning to be performed precisely for a wider range of defects caused by trauma.
DOI: http://dx.doi.org/10.1007/978-3-030-32254-0_37
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6910247
October 2019

Mitigating gyral bias in cortical tractography via asymmetric fiber orientation distributions.

Med Image Anal 2020 01 13;59:101543. Epub 2019 Sep 13.

Department of Radiology and Biomedical Research Imaging Center (BRIC) University of North Carolina at Chapel Hill, NC, U.S.A. Electronic address:

Diffusion tractography in brain connectomics often involves tracing axonal trajectories across gray-white matter boundaries in gyral blades of complex cortical convolutions. To date, gyral bias is observed in most tractography algorithms with streamlines predominantly terminating at gyral crowns instead of sulcal banks. This work demonstrates that asymmetric fiber orientation distribution functions (AFODFs), computed via a multi-tissue global estimation framework, can mitigate the effects of gyral bias, enabling fiber streamlines at gyral blades to make sharper turns into the cortical gray matter. We use ex-vivo data of an adult rhesus macaque and in-vivo data from the Human Connectome Project (HCP) to show that the fiber streamlines given by AFODFs bend more naturally into the cortex than the conventional symmetric FODFs in typical gyral blades. We demonstrate that AFODF tractography improves cortico-cortical connectivity and provides highly consistent outcomes between two different field strengths (3T and 7T).
DOI: http://dx.doi.org/10.1016/j.media.2019.101543
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6935166
January 2020

Adversarial learning for mono- or multi-modal registration.

Med Image Anal 2019 12 24;58:101545. Epub 2019 Aug 24.

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea. Electronic address:

This paper introduces an unsupervised adversarial similarity network for image registration. Unlike existing deep learning registration methods, our approach can train a deformable registration network without the need of ground-truth deformations and specific similarity metrics. We connect a registration network and a discrimination network with a deformable transformation layer. The registration network is trained with the feedback from the discrimination network, which is designed to judge whether a pair of registered images are sufficiently similar. Using adversarial training, the registration network is trained to predict deformations that are accurate enough to fool the discrimination network. The proposed method is thus a general registration framework, which can be applied for both mono-modal and multi-modal image registration. Experiments on four brain MRI datasets and a multi-modal pelvic image dataset indicate that our method yields promising registration performance in accuracy, efficiency and generalizability compared with state-of-the-art registration methods, including those based on deep learning.
DOI: http://dx.doi.org/10.1016/j.media.2019.101545
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7455790
December 2019

Fast Groupwise Registration Using Multi-Level and Multi-Resolution Graph Shrinkage.

Sci Rep 2019 09 3;9(1):12703. Epub 2019 Sep 3.

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.

Groupwise registration aligns a set of images to a common space. It can however be inefficient and ineffective when dealing with datasets with significant anatomical variations. To mitigate these problems, we propose a groupwise registration framework based on hierarchical multi-level and multi-resolution shrinkage of a graph set. First, to deal with datasets with complex inhomogeneous image distributions, we divide the images hierarchically into multiple clusters. Since the images in each cluster have similar appearances, they can be registered effectively. Second, we employ a multi-resolution strategy to reduce computational cost. Experimental results on two public datasets show that our proposed method yields state-of-the-art registration accuracy with significantly reduced computational time.
DOI: http://dx.doi.org/10.1038/s41598-019-48491-9
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6722141
September 2019

One-Shot Generative Adversarial Learning for MRI Segmentation of Craniomaxillofacial Bony Structures.

IEEE Trans Med Imaging 2020 03 14;39(3):787-796. Epub 2019 Aug 14.

Compared to computed tomography (CT), magnetic resonance imaging (MRI) delineation of craniomaxillofacial (CMF) bony structures can avoid harmful radiation exposure. However, bony boundaries are blurry in MRI, and structural information needs to be borrowed from CT during the training. This is challenging since paired MRI-CT data are typically scarce. In this paper, we propose to make full use of unpaired data, which are typically abundant, along with a single paired MRI-CT data to construct a one-shot generative adversarial model for automated MRI segmentation of CMF bony structures. Our model consists of a cross-modality image synthesis sub-network, which learns the mapping between CT and MRI, and an MRI segmentation sub-network. These two sub-networks are trained jointly in an end-to-end manner. Moreover, in the training phase, a neighbor-based anchoring method is proposed to reduce the ambiguity problem inherent in cross-modality synthesis, and a feature-matching-based semantic consistency constraint is proposed to encourage segmentation-oriented MRI synthesis. Experimental results demonstrate the superiority of our method both qualitatively and quantitatively in comparison with the state-of-the-art MRI segmentation methods.
DOI: http://dx.doi.org/10.1109/TMI.2019.2935409
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7219540
March 2020