Publications by authors named "Chunfeng Lian"

40 Publications

Quantity and Morphology of Perivascular Spaces: Associations With Vascular Risk Factors and Cerebral Small Vessel Disease.

J Magn Reson Imaging 2021 May 17. Epub 2021 May 17.

Department of Radiology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China.

Background: Perivascular spaces (PVSs) are an important component of the brain glymphatic system. While visual rating has been widely used to assess PVS, computational measures may have higher sensitivity for capturing PVS characteristics under disease conditions.

Purpose: To compute quantitative and morphological PVS features and to assess their associations with vascular risk factors and cerebral small vessel disease (CSVD).

Study Type: Prospective.

Population: One hundred sixty-one middle-aged/later middle-aged subjects (mean age = 60.4 ± 7.3 years).

Sequence: 3D T1-weighted, T2-weighted, and T2-FLAIR sequences, and a susceptibility-weighted multiecho gradient-echo sequence on a 3 T scanner.

Assessment: Automated PVS segmentation was performed on sub-millimeter T2-weighted images. Quantitative and morphological PVS features were calculated in white matter (WM) and basal ganglia (BG) regions, including volume, count, size, length, width, and linearity. Visual PVS scores were also acquired for comparison.

Statistical Tests: Simple and multiple linear regression analyses were used to explore the associations among variables.

Results: WM-PVS visual score and count were associated with hypertension (β = 0.161, P < 0.05; β = 0.193, P < 0.05), as were BG-PVS rating score, volume, count, and length (β = 0.197, P < 0.05; β = 0.170, P < 0.05; β = 0.200, P < 0.05; β = 0.172, P < 0.05). WM-PVS size was associated with diabetes (β = 0.165, P < 0.05). WM-PVS and BG-PVS measures were associated with CSVD markers, especially white matter hyperintensities (WMHs) (P < 0.05). Multiple regression analysis showed that WM/BG-PVS quantitative measures were widely associated with vascular risk factors and CSVD markers (P < 0.05). Morphological measures were associated with WMH severity in the WM region, and with lacunes and microbleeds (P < 0.05) in the BG region.
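
The standardized betas reported above come from ordinary least squares on z-scored variables. A minimal numpy sketch with synthetic data (not the study's cohort; predictor names are illustrative) of how such coefficients can be computed:

```python
import numpy as np

def standardized_betas(X, y):
    """OLS fit after z-scoring every variable, so the coefficients are
    standardized betas comparable across predictors."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)  # stable least squares
    return beta

# Toy cohort: a PVS measure driven mostly by the first risk factor
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # e.g. hypertension, diabetes, age (hypothetical)
y = 0.5 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.5, size=200)
betas = standardized_betas(X, y)
```

In the study, each reported beta would come from such a model with the PVS measure as `y` and the risk factors as columns of `X`.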

Data Conclusion: These novel PVS measures may capture mild PVS alterations driven by different pathologies.

Evidence Level: 2. Technical Efficacy: Stage 2.
http://dx.doi.org/10.1002/jmri.27702

Diverse data augmentation for learning image segmentation with cross-modality annotations.

Med Image Anal 2021 Jul 20;71:102060. Epub 2021 Apr 20.

Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA.

The dearth of annotated data is a major hurdle in building reliable image segmentation models. Manual annotation of medical images is tedious, time-consuming, and significantly variable across imaging modalities. The need for annotation can be ameliorated by leveraging an annotation-rich source modality when learning a segmentation model for an annotation-poor target modality. In this paper, we introduce a diverse data augmentation generative adversarial network (DDA-GAN) to train a segmentation model for an unannotated target image domain by borrowing information from an annotated source image domain. This is achieved by generating diverse augmented data for the target domain via one-to-many source-to-target translation. The DDA-GAN uses unpaired images from the source and target domains and is an end-to-end convolutional neural network that (i) explicitly disentangles domain-invariant structural features related to segmentation from domain-specific appearance features, (ii) combines structural features from the source domain with appearance features randomly sampled from the target domain for data augmentation, and (iii) trains the segmentation model with the augmented data in the target domain and the annotations from the source domain. The effectiveness of our method is demonstrated both qualitatively and quantitatively in comparison with the state of the art for segmentation of craniomaxillofacial bony structures via MRI and cardiac substructures via CT.
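
The core augmentation idea, recombining a fixed domain-invariant structure code with appearance codes randomly sampled from the target domain, can be sketched with plain arrays. This is a toy stand-in: the real DDA-GAN learns both codes with encoders and decodes each pair back to an image, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(structure_feat, target_appearance_pool, rng):
    """One-to-many augmentation in the spirit of DDA-GAN: keep the
    domain-invariant structure code fixed and recombine it with an
    appearance code randomly sampled from the target domain.
    (Additive recombination is a placeholder for the learned decoder.)"""
    idx = rng.integers(len(target_appearance_pool))
    appearance = target_appearance_pool[idx]
    return structure_feat + appearance, idx

source_structure = np.ones(8)  # code from an annotated source image (toy)
target_pool = [rng.normal(size=8) for _ in range(5)]  # target appearance codes

# One-to-many: the same structure yields several augmented samples
samples = [augment(source_structure, target_pool, rng)[0] for _ in range(3)]
```

Each augmented sample keeps the source structure (and hence its annotation) while varying its target-domain appearance.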
http://dx.doi.org/10.1016/j.media.2021.102060
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8184609

HF-UNet: Learning Hierarchically Inter-Task Relevance in Multi-Task U-Net for Accurate Prostate Segmentation in CT images.

IEEE Trans Med Imaging 2021 Apr 13;PP. Epub 2021 Apr 13.

Accurate segmentation of the prostate is a key step in external beam radiation therapy. In this paper, we tackle the challenging task of prostate segmentation in CT images with a two-stage network: 1) a first stage to quickly localize the prostate, and 2) a second stage to accurately segment it. To precisely segment the prostate in the second stage, we formulate prostate segmentation as a multi-task learning problem, which includes a main task to segment the prostate and an auxiliary task to delineate the prostate boundary. The auxiliary task provides additional guidance for the unclear prostate boundary in CT images. Moreover, conventional multi-task deep networks typically share most of their parameters (i.e., feature representations) across all tasks, which may limit their data-fitting ability, as the specificity of different tasks is inevitably ignored. We instead employ a hierarchically fused U-Net structure, namely HF-UNet, which has two complementary branches for the two tasks, with a novel attention-based task-consistency learning block enabling communication between the two decoding branches at each level. HF-UNet can thus hierarchically learn representations shared across tasks while preserving the specificity of the representations learned for each task. We conducted extensive evaluations of the proposed method on a large planning CT image dataset and a benchmark prostate zonal dataset. The experimental results show that HF-UNet outperforms conventional multi-task network architectures and state-of-the-art methods.
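
The auxiliary boundary task needs a boundary target derived from the segmentation mask. A small 2D numpy sketch of that derivation and a combined two-task objective, with plain mean-squared error standing in for the actual losses (which the abstract does not specify):

```python
import numpy as np

def boundary_map(mask):
    """Auxiliary-task target: 1 where a foreground pixel touches the
    background (4-neighbourhood). Toy 2D stand-in for the 3D CT case."""
    padded = np.pad(mask, 1, mode="edge")
    neigh_min = np.minimum.reduce([padded[:-2, 1:-1], padded[2:, 1:-1],
                                   padded[1:-1, :-2], padded[1:-1, 2:]])
    return ((mask == 1) & (neigh_min == 0)).astype(float)

def multitask_loss(pred_mask, pred_boundary, gt_mask, alpha=0.5):
    """Weighted sum of the main (segmentation) and auxiliary (boundary)
    objectives; both are plain MSE here for brevity."""
    seg_loss = np.mean((pred_mask - gt_mask) ** 2)
    bnd_loss = np.mean((pred_boundary - boundary_map(gt_mask)) ** 2)
    return seg_loss + alpha * bnd_loss

gt = np.zeros((6, 6))
gt[2:5, 2:5] = 1                                  # a 3x3 "prostate"
perfect = multitask_loss(gt, boundary_map(gt), gt)  # perfect predictions
```

A perfect prediction on both branches drives the combined loss to zero; in HF-UNet the two branches additionally exchange features at each decoding level.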
http://dx.doi.org/10.1109/TMI.2021.3072956

Factors Associated With the Dilation of Perivascular Space in Healthy Elderly Subjects.

Front Aging Neurosci 2021 26;13:624732. Epub 2021 Mar 26.

Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China.

The dilation of perivascular space (PVS) has been widely used to reflect brain degeneration in clinical brain imaging studies. However, PVS characteristics exhibit large differences in healthy subjects. Such variations need to be better addressed before PVS can be used to reflect pathological changes. In the present study, we aim to investigate the potential influence of several related factors on PVS dilation in healthy elderly subjects. One hundred and three subjects (mean age = 59.5) were retrospectively included from a prospectively collected community cohort. Multi-modal high-resolution magnetic resonance imaging and cognitive assessments were performed on each subject. Machine-learning based segmentation methods were employed to quantify PVS volume and white matter hyperintensity (WMH) volume. Multiple regression analysis was performed to reveal the influence of demographic factors, vascular risk factors, intracranial volume (ICV), major brain artery diameters, and brain atrophy on PVS dilation. Multiple regression analysis showed that age was positively associated with the basal ganglia (BG) (standardized beta = 0.227, p = 0.027) and deep white matter (standardized beta = 0.220, p = 0.029) PVS volume. Hypertension was positively associated with deep white matter PVS volume (standardized beta = 0.234, p = 0.017). Furthermore, we found that ICV was strongly associated with deep white matter PVS volume (standardized beta = 0.354, p < 0.001), while the intracranial artery diameter was negatively associated with deep white matter PVS volume (standardized beta = -0.213, p = 0.032). Intracranial volume thus has a significant influence on deep white matter PVS volume, and future studies on PVS dilation should include ICV as an important covariate.
http://dx.doi.org/10.3389/fnagi.2021.624732
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8032856

MetricUNet: Synergistic image- and voxel-level learning for precise prostate segmentation via online sampling.

Med Image Anal 2021 Jul 23;71:102039. Epub 2021 Mar 23.

School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea.

Fully convolutional networks (FCNs), including UNet and VNet, are widely used network architectures for semantic segmentation in recent studies. However, a conventional FCN is typically trained by the cross-entropy or Dice loss, which only calculates the error between predictions and ground-truth labels for pixels individually. This often results in non-smooth neighborhoods in the predicted segmentation. This problem becomes more serious in CT prostate segmentation, as CT images are usually of low tissue contrast. To address this problem, we propose a two-stage framework, with the first stage to quickly localize the prostate region and the second stage to precisely segment the prostate by a multi-task UNet architecture. We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network. The proposed network therefore has a dual-branch architecture that tackles two tasks: (1) a segmentation sub-network aiming to generate the prostate segmentation, and (2) a voxel-metric learning sub-network aiming to improve the quality of the learned feature space supervised by a metric loss. Specifically, the voxel-metric learning sub-network samples tuples (including triplets and pairs) at the voxel level from the intermediate feature maps. Unlike conventional deep metric learning methods that generate triplets or pairs at the image level before the training phase, our proposed voxel-wise tuples are sampled in an online manner and operated in an end-to-end fashion via multi-task learning. To evaluate the proposed method, we conducted extensive experiments on a real CT image dataset consisting of 339 patients. The ablation studies show that our method can effectively learn more representative voxel-level features compared with conventional learning methods with cross-entropy or Dice loss. The comparisons show that the proposed method outperforms the state-of-the-art methods by a reasonable margin.
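
Online voxel-wise tuple sampling can be illustrated with a toy triplet loss over flattened voxel features: triplets are drawn from the current feature map during training rather than being fixed beforehand. This is a sketch of the general technique, not the paper's exact sampling scheme.

```python
import numpy as np

def sample_triplet_loss(features, labels, margin=1.0, n_triplets=50, seed=0):
    """Online voxel-wise hinge triplet loss: anchor/positive share a label
    (prostate), the negative differs (background), and all are drawn from
    the current feature map."""
    rng = np.random.default_rng(seed)
    fg = np.flatnonzero(labels == 1)
    bg = np.flatnonzero(labels == 0)
    losses = []
    for _ in range(n_triplets):
        a, p = rng.choice(fg, size=2, replace=False)
        n = rng.choice(bg)
        d_ap = np.linalg.norm(features[a] - features[p])
        d_an = np.linalg.norm(features[a] - features[n])
        losses.append(max(0.0, d_ap - d_an + margin))  # hinge triplet loss
    return float(np.mean(losses))

# Well-separated toy embedding: foreground voxels cluster far from background
feats = np.vstack([np.zeros((20, 4)), np.full((20, 4), 5.0)])
labs = np.array([1] * 20 + [0] * 20)
loss = sample_triplet_loss(feats, labs)  # separated features -> zero loss
```

An embedding that fails to separate the classes is penalized by the full margin, which is the gradient signal the metric branch would propagate.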
http://dx.doi.org/10.1016/j.media.2021.102039

Dilated perivascular space is related to reduced free-water in surrounding white matter among healthy adults and elderlies but not in patients with severe cerebral small vessel disease.

J Cereb Blood Flow Metab 2021 Apr 4:271678X211005875. Epub 2021 Apr 4.

Department of Radiology, School of Medicine, Second Affiliated Hospital of Zhejiang University, Zhejiang, China.

Perivascular space facilitates cerebral interstitial water clearance. However, it is unclear how dilated perivascular space (dPVS) affects the interstitial water of surrounding white matter. We aimed to determine the presence and extent of changes in normal-appearing white matter water components around dPVS in different populations. Twenty healthy elderly subjects and 15 elderly subjects with severe cerebral small vessel disease (CSVD, with lacunar infarction 6 months before the scan) were included in our study. Another 28 healthy adult subjects were enrolled under different scanning parameters to test whether the results were comparable. The normal-appearing white matter around dPVS was categorized into 10 layers (1 mm thickness each) based on their distance to dPVS. We evaluated the mean isotropic-diffusing water volume fraction in each layer. We found a significantly reduced free-water content in the layers closely adjacent to the dPVS in the healthy elderly subjects; however, this reduction around dPVS was weaker in the CSVD subjects. We also found an elevated free-water content within dPVS. dPVS thus appears to play different roles in healthy and CSVD subjects: the reduced water content around dPVS in healthy subjects suggests these MR-visible PVSs are not always related to the stagnation of fluid.
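
The layer analysis assigns each normal-appearing voxel to a 1-voxel-thick shell by its distance to the nearest dPVS voxel and averages the water fraction per shell. A 2D toy version (brute-force distances keep it self-contained; a distance transform would be used in practice):

```python
import numpy as np

def shell_means(value_map, pvs_mask, n_layers=10):
    """Mean of value_map (e.g. free-water fraction) in successive
    1-voxel-thick layers around a PVS mask, analogous to the
    10 x 1 mm layers used in the study."""
    coords = np.argwhere(pvs_mask)
    dist = np.zeros(pvs_mask.shape)
    for idx in np.ndindex(pvs_mask.shape):
        dist[idx] = np.sqrt(((coords - idx) ** 2).sum(axis=1)).min()
    means = []
    for k in range(1, n_layers + 1):
        layer = (dist > k - 1) & (dist <= k)   # shell at distance (k-1, k]
        means.append(value_map[layer].mean() if layer.any() else np.nan)
    return means

# Toy: free water decreases toward the PVS, as seen in healthy subjects
pvs = np.zeros((21, 21), bool)
pvs[10, 10] = True
yy, xx = np.mgrid[:21, :21]
fw = np.sqrt((yy - 10) ** 2 + (xx - 10) ** 2) / 20.0   # lower FW near PVS
layers = shell_means(fw, pvs, n_layers=5)
```

With this synthetic pattern the shell means rise monotonically with distance, mirroring the reduced free water adjacent to dPVS reported in the healthy group.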
http://dx.doi.org/10.1177/0271678X211005875

Deep white matter hyperintensity is associated with the dilation of perivascular space.

J Cereb Blood Flow Metab 2021 Mar 24:271678X211002279. Epub 2021 Mar 24.

Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China.

Understanding the pathophysiology of white matter hyperintensity (WMH) is necessary to reduce its harmfulness. Dilated perivascular space (PVS) has been found to be related to WMH. In the present study, we aimed to examine the topological connections between WMH and PVS, and to investigate whether increased interstitial fluid mediates the correlation between PVS and WMH volumes. One hundred and thirty-six healthy elderly subjects were retrospectively included from a prospectively collected community cohort. Sub-millimeter T2-weighted and FLAIR images were acquired for assessing the association between PVS and WMH. Diffusion tensor imaging and free-water (FW) analytical methods were used to quantify white matter free-water content and to explore whether it mediates the PVS-WMH association. We found that most (89%) of the deep WMH lesions were spatially connected with PVS, exhibiting several interesting topological types. PVS and WMH volumes were also significantly correlated (r = 0.222, p < 0.001). FW mediated this association in the whole sample (β = 0.069, p = 0.037) and in subjects with relatively high WMH load (β = 0.118, p = 0.006). These findings suggest a tight association between PVS dilation and WMH formation, which might be linked by impaired glymphatic drainage function and accumulated local interstitial fluid.
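
A mediation effect like "FW mediates the PVS-WMH association" is commonly quantified as the product of two regression coefficients. A hedged numpy sketch (the exact mediation model used in the paper is not specified in the abstract) with synthetic data in which free water genuinely transmits the effect:

```python
import numpy as np

def mediation_indirect_effect(x, m, y):
    """Product-of-coefficients mediation: a = effect of X on M,
    b = effect of M on Y controlling for X; indirect effect = a * b."""
    a = np.polyfit(x, m, 1)[0]                        # X -> M slope
    X2 = np.column_stack([x, m, np.ones_like(x)])     # Y ~ X + M
    coefs, *_ = np.linalg.lstsq(X2, y, rcond=None)
    b = coefs[1]                                      # M -> Y slope given X
    return a * b

rng = np.random.default_rng(1)
pvs = rng.normal(size=300)                            # toy PVS volume
fw = 0.8 * pvs + rng.normal(scale=0.3, size=300)      # PVS raises FW
wmh = 0.6 * fw + rng.normal(scale=0.3, size=300)      # FW raises WMH
ab = mediation_indirect_effect(pvs, fw, wmh)          # roughly 0.8 * 0.6
```

When the effect flows entirely through the mediator, as simulated here, the indirect effect a*b recovers (up to noise) the full PVS-to-WMH path.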
http://dx.doi.org/10.1177/0271678X211002279

3D morphometric quantification of maxillae and defects for patients with unilateral cleft palate via deep learning-based CBCT image auto-segmentation.

Orthod Craniofac Res 2021 Mar 12. Epub 2021 Mar 12.

Division of Orthodontics, College of Dentistry, The Ohio State University, Columbus, OH, USA.

Objective: This study aimed to quantify the 3D asymmetry of the maxilla in patients with unilateral cleft lip and palate (UCP) and investigate the defect factors responsible for the variability of the maxilla on the cleft side using a deep-learning-based CBCT image segmentation protocol.

Setting And Sample Population: Cone beam computed tomography (CBCT) images of 60 patients with UCP were acquired. The samples in this study consisted of 39 males and 21 females, with a mean age of 11.52 years (SD = 3.27 years; range of 8-18 years).

Materials And Methods: The deep-learning-based protocol was used to segment the maxilla and defect initially, followed by manual refinement. Paired t-tests were performed to characterize the maxillary asymmetry. A multiple linear regression was carried out to investigate the relationship between the defect parameters and those of the cleft side of the maxilla.
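
The paired t-test in the methods compares cleft- and non-cleft-side measurements subject by subject. A self-contained sketch with hypothetical volumes (illustrative numbers only, not the study's data):

```python
import math

def paired_t(x, y):
    """Paired t statistic for side-by-side measurements from the same
    subjects (df = n - 1)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

cleft = [9.1, 8.7, 9.4, 8.9, 9.0]        # hypothetical volumes, cleft side
noncleft = [10.2, 9.8, 10.1, 9.9, 10.4]  # hypothetical, non-cleft side
t = paired_t(cleft, noncleft)            # strongly negative: cleft side smaller
```

A large negative t here corresponds to the significant cleft-side decrease reported in the results.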

Results: The cleft side of the maxilla demonstrated a significant decrease in maxillary volume and length as well as alveolar length, anterior width, posterior width, anterior height and posterior height. A significant increase in maxillary anterior width was demonstrated on the cleft side of the maxilla. There was a close relationship between the defect parameters and those of the cleft side of the maxilla.

Conclusions: Based on the 3D volumetric segmentations, significant hypoplasia of the maxilla on the cleft side existed in the pyriform aperture and alveolar crest area near the defect. The defect structures appeared to contribute to the variability of the maxilla on the cleft side.
http://dx.doi.org/10.1111/ocr.12482

Multi-Task Weakly-Supervised Attention Network for Dementia Status Estimation With Structural MRI.

IEEE Trans Neural Netw Learn Syst 2021 Mar 3;PP. Epub 2021 Mar 3.

Accurate prediction of clinical scores (of neuropsychological tests) based on noninvasive structural magnetic resonance imaging (MRI) helps understand the pathological stage of dementia (e.g., Alzheimer's disease (AD)) and forecast its progression. Existing machine/deep learning approaches typically preselect dementia-sensitive brain locations for MRI feature extraction and model construction, potentially leading to undesired heterogeneity between different stages and degraded prediction performance. Besides, these methods usually rely on prior anatomical knowledge (e.g., brain atlas) and time-consuming nonlinear registration for the preselection of brain locations, thereby ignoring individual-specific structural changes during dementia progression because all subjects share the same preselected brain regions. In this article, we propose a multi-task weakly-supervised attention network (MWAN) for the joint regression of multiple clinical scores from baseline MRI scans. Three sequential components are included in MWAN: 1) a backbone fully convolutional network for extracting MRI features; 2) a weakly supervised dementia attention block for automatically identifying subject-specific discriminative brain locations; and 3) an attention-aware multitask regression block for jointly predicting multiple clinical scores. The proposed MWAN is an end-to-end and fully trainable deep learning model in which dementia-aware holistic feature learning and multitask regression model construction are integrated into a unified framework. Our MWAN method was evaluated on two public AD data sets for estimating clinical scores of mini-mental state examination (MMSE), clinical dementia rating sum of boxes (CDRSB), and AD assessment scale cognitive subscale (ADAS-Cog). Quantitative experimental results demonstrate that our method produces superior regression performance compared with state-of-the-art methods. 
Importantly, qualitative results indicate that the dementia-sensitive brain locations automatically identified by our MWAN method well retain individual specificities and are biologically meaningful.
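
The attention-aware multitask regression step can be sketched as attention-weighted pooling of a spatial feature map followed by one linear head per clinical score. This toy uses random values in place of learned weights; the score names follow the abstract.

```python
import numpy as np

def attention_pooled_scores(feature_map, attn_logits, heads):
    """Attention-aware multitask regression in the spirit of MWAN:
    a softmax-normalized spatial attention map highlights
    subject-specific locations, features are attention-pooled, and one
    linear head per clinical score reads them out."""
    attn = np.exp(attn_logits - attn_logits.max())
    attn /= attn.sum()                                         # spatial softmax
    pooled = (feature_map * attn[..., None]).sum(axis=(0, 1))  # weighted pooling
    return {name: float(w @ pooled + b) for name, (w, b) in heads.items()}

rng = np.random.default_rng(3)
fmap = rng.normal(size=(4, 4, 8))   # toy spatial feature map (H, W, C)
logits = rng.normal(size=(4, 4))    # attention logits (learned in MWAN)
heads = {"MMSE": (rng.normal(size=8), 25.0),
         "CDRSB": (rng.normal(size=8), 2.0),
         "ADAS-Cog": (rng.normal(size=8), 10.0)}
scores = attention_pooled_scores(fmap, logits, heads)
```

With uniform attention the pooling reduces to plain average pooling; the learned, subject-specific attention is what lets the model weight discriminative brain locations.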
http://dx.doi.org/10.1109/TNNLS.2021.3055772

White Matter Free Water is a Composite Marker of Cerebral Small Vessel Degeneration.

Transl Stroke Res 2021 Feb 25. Epub 2021 Feb 25.

Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310000, China.

To investigate the association between white matter free water (FW) and common imaging markers of cerebral small vessel disease (CSVD) in two groups of subjects with different clinical status. One hundred and forty-four community subjects (mean age 60.5) and 84 CSVD subjects (mean age 61.2) were retrospectively included in the present study. All subjects received multi-modal magnetic resonance imaging and clinical assessments. The associations between white matter FW and common CSVD imaging markers, including white matter hyperintensities (WMH), dilated perivascular space (PVS), lacunes, and microbleeds, were assessed using simple and multiple regression analyses. The associations between FW and cognitive scores were also investigated. White matter FW was positively associated with WMH volume (β = 0.270, p = 0.001), PVS volume (β = 0.290, p < 0.001), number of microbleeds (β = 0.148, p = 0.043), and age (β = 0.170, p = 0.036) in the community cohort. In the CSVD cohort, FW was positively associated with WMH volume (β = 0.648, p < 0.001), PVS score (β = 0.224, p < 0.001), number of lacunes (β = 0.140, p = 0.046), and sex (β = 0.125, p = 0.036). The associations between FW and cognitive scores were stronger than those of conventional CSVD markers in both datasets. White matter FW is a potential composite marker that can sensitively detect cerebral small vessel degeneration and also reflect cognitive impairments.
http://dx.doi.org/10.1007/s12975-021-00899-0

Estimating Reference Bony Shape Models for Orthognathic Surgical Planning Using 3D Point-Cloud Deep Learning.

IEEE J Biomed Health Inform 2021 Jan 26;PP. Epub 2021 Jan 26.

Orthognathic surgical outcomes rely heavily on the quality of surgical planning. Automatic estimation of a reference facial bone shape significantly reduces experience-dependent variability and improves planning accuracy and efficiency. We propose an end-to-end deep learning framework to estimate patient-specific reference bony shape models for patients with orthognathic deformities. Specifically, we apply a point-cloud network to learn a vertex-wise deformation field from a patient's deformed bony shape, represented as a point cloud. The estimated deformation field is then used to correct the deformed bony shape and output a patient-specific reference bony surface model. To train our network effectively, we introduce a simulation strategy to synthesize deformed bones from any given normal bone, producing a relatively large and diverse dataset of shapes for training. Our method was evaluated using both synthetic and real patient data. Experimental results show that our framework estimates realistic reference bony shape models for patients with varying deformities. The performance of our method is consistently better than an existing method and several deep point-cloud networks. Our end-to-end estimation framework based on geometric deep learning shows great potential for improving clinical workflows.
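
The two ingredients, applying a vertex-wise deformation field to a point cloud and synthesizing training deformities from a normal bone, can be sketched in a few lines. The Gaussian "push" used here is an illustrative stand-in for the paper's simulation strategy.

```python
import numpy as np

def correct_shape(points, deformation_field):
    """Apply a vertex-wise deformation field to a deformed bony point
    cloud to obtain the reference shape (the network's job is to
    *predict* this field; here it is given)."""
    return points + deformation_field

def simulate_deformity(normal_points, center, strength):
    """Training-data synthesis: push vertices near a chosen center to
    turn a normal bone into a synthetically deformed one. The inverse
    displacement is the ground-truth correcting field."""
    d = np.linalg.norm(normal_points - center, axis=1, keepdims=True)
    push = strength * np.exp(-d ** 2) * (normal_points - center)
    return normal_points + push, -push   # deformed shape, correcting field

rng = np.random.default_rng(7)
normal = rng.normal(size=(100, 3))       # toy "normal bone" vertices
deformed, field = simulate_deformity(normal, np.zeros(3), 0.3)
restored = correct_shape(deformed, field)
```

Training pairs are (deformed shape, correcting field); at test time the network predicts the field for a real patient's deformed bone.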
http://dx.doi.org/10.1109/JBHI.2021.3054494

Deep Bayesian Hashing With Center Prior for Multi-Modal Neuroimage Retrieval.

IEEE Trans Med Imaging 2021 Feb 2;40(2):503-513. Epub 2021 Feb 2.

Multi-modal neuroimage retrieval has greatly facilitated the efficiency and accuracy of decision making in clinical practice by providing physicians with previous cases (with visually similar neuroimages) and corresponding treatment records. However, existing methods for image retrieval usually fail when applied directly to multi-modal neuroimage databases, since neuroimages generally have smaller inter-class variation and larger inter-modal discrepancy compared to natural images. To this end, we propose a deep Bayesian hash learning framework, called CenterHash, which can map multi-modal data into a shared Hamming space and learn discriminative hash codes from imbalanced multi-modal neuroimages. The key idea to tackle the small inter-class variation and large inter-modal discrepancy is to learn a common center representation for similar neuroimages from different modalities and encourage hash codes to be explicitly close to their corresponding center representations. Specifically, we measure the similarity between hash codes and their corresponding center representations and treat it as a center prior in the proposed Bayesian learning framework. A weighted contrastive likelihood loss function is also developed to facilitate hash learning from imbalanced neuroimage pairs. Comprehensive empirical evidence shows that our method can generate effective hash codes and yield state-of-the-art performance in cross-modal retrieval on three multi-modal neuroimage datasets.
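
The center-prior idea, pulling each code toward a shared per-class center so codes from different modalities hash alike, can be sketched with ±1-valued toy codes (a simplified stand-in for the relaxed codes and Bayesian likelihood used in CenterHash):

```python
import numpy as np

def center_prior_loss(codes, class_ids, centers):
    """Center prior, sketched: each (±1-valued) hash code is pulled
    toward the shared center code of its class via a squared penalty."""
    target = centers[class_ids]                  # (N, bits) center per sample
    return float(np.mean((codes - target) ** 2))

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.sum(a != b))

centers = np.array([[1, 1, 1, 1, -1, -1, -1, -1],    # class-0 center code
                    [-1, -1, -1, -1, 1, 1, 1, 1]])   # class-1 center code
# Two modalities, same class 0: both codes lie near the class-0 center
mri_code = np.array([1, 1, 1, 1, -1, -1, -1, 1])
pet_code = np.array([1, 1, -1, 1, -1, -1, -1, -1])
loss = center_prior_loss(np.stack([mri_code, pet_code]), np.array([0, 0]), centers)
```

Because both modality codes sit near the same class center, their mutual Hamming distance is small while their distance to the other class's center stays large, which is exactly the retrieval behavior the prior encourages.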
http://dx.doi.org/10.1109/TMI.2020.3030752
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7909752

Anatomy-Regularized Representation Learning for Cross-Modality Medical Image Segmentation.

IEEE Trans Med Imaging 2021 01 29;40(1):274-285. Epub 2020 Dec 29.

An increasing number of studies are leveraging unsupervised cross-modality synthesis to mitigate the limited-label problem in training medical image segmentation models. They typically transfer ground truth annotations from a label-rich imaging modality to a label-lacking imaging modality, under the assumption that different modalities share the same anatomical structure information. However, since these methods commonly use voxel/pixel-wise cycle-consistency to regularize the mappings between modalities, high-level semantic information is not necessarily preserved. In this paper, we propose a novel anatomy-regularized representation learning approach for segmentation-oriented cross-modality image synthesis. It learns a common feature encoding across different modalities to form a shared latent space, where 1) the input and its synthesis present consistent anatomical structure information, and 2) the transformation between two images in one domain is preserved by their syntheses in another domain. We applied our method to the tasks of cross-modality skull segmentation and cardiac substructure segmentation. Experimental results demonstrate the superiority of our method in comparison with state-of-the-art cross-modality medical image segmentation methods.
http://dx.doi.org/10.1109/TMI.2020.3025133
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8120796

Attention-Guided Hybrid Network for Dementia Diagnosis With Structural MR Images.

IEEE Trans Cybern 2020 Jul 28;PP. Epub 2020 Jul 28.

Deep-learning methods (especially convolutional neural networks) using structural magnetic resonance imaging (sMRI) data have been successfully applied to computer-aided diagnosis (CAD) of Alzheimer's disease (AD) and its prodromal stage [i.e., mild cognitive impairment (MCI)]. As it is practically challenging to capture local and subtle disease-associated abnormalities directly from the whole-brain sMRI, most of those deep-learning approaches empirically preselect disease-associated sMRI brain regions for model construction. Considering that such isolated selection of potentially informative brain locations might be suboptimal, very few methods have been proposed to perform disease-associated discriminative region localization and disease diagnosis in a unified deep-learning framework. However, those methods based on task-oriented discriminative localization still suffer from two common limitations, that is: 1) identified brain locations are strictly consistent across all subjects, which ignores the unique anatomical characteristics of each brain and 2) only limited local regions/patches are used for model training, which does not fully utilize the global structural information provided by the whole-brain sMRI. In this article, we propose an attention-guided deep-learning framework to extract multilevel discriminative sMRI features for dementia diagnosis. Specifically, we first design a backbone fully convolutional network to automatically localize the discriminative brain regions in a weakly supervised manner. Using the identified disease-related regions as spatial attention guidance, we further develop a hybrid network to jointly learn and fuse multilevel sMRI features for CAD model construction. Our proposed method was evaluated on three public datasets (i.e., ADNI-1, ADNI-2, and AIBL), showing superior performance compared with several state-of-the-art methods in both tasks of AD diagnosis and MCI conversion prediction.
http://dx.doi.org/10.1109/TCYB.2020.3005859
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7855081

High-Resolution Breast MRI Reconstruction Using a Deep Convolutional Generative Adversarial Network.

J Magn Reson Imaging 2020 12 12;52(6):1852-1858. Epub 2020 Jul 12.

Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China.

Background: A generative adversarial network could be used for high-resolution (HR) medical image synthesis with reduced scan time.

Purpose: To evaluate the potential of using a deep convolutional generative adversarial network (DCGAN) for generating high-resolution (HR) precontrast and postcontrast images from their corresponding low-resolution (LR) images.

Study Type: This was a retrospective analysis of a prospectively acquired cohort.

Population: In all, 224 subjects were randomly divided into 200 training subjects and an independent testing set of 24 subjects.

Field Strength/sequence: Dynamic contrast-enhanced (DCE) MRI with a 1.5T scanner.

Assessment: Three breast radiologists independently ranked the image datasets, using the DCE images as the ground truth, and reviewed the image quality of both the original LR images and the generated HR images. The Breast Imaging Reporting and Data System (BI-RADS) category and conspicuity of lesions were also ranked. Intraclass correlation coefficients (ICCs) of mean image quality scores, lesion conspicuity scores, and BI-RADS categories were calculated among the three readers.

Statistical Test: Wilcoxon signed-rank tests evaluated differences among the multireader ranking scores.

Results: The mean overall image quality scores of the generated precontrast and postcontrast HR images were significantly higher than those of the original LR images (4.77 ± 0.41 vs. 3.27 ± 0.43 and 4.72 ± 0.44 vs. 3.23 ± 0.43, respectively; P < 0.0001 in the multireader study). The mean lesion conspicuity scores of the generated precontrast and postcontrast HR images were significantly higher than those of the original LR images (4.18 ± 0.70 vs. 3.49 ± 0.58 and 4.35 ± 0.59 vs. 3.48 ± 0.61, respectively; P < 0.001 in the multireader study). The image quality scores, lesion conspicuity scores, and BI-RADS categories showed good agreement among the three readers (all ICCs > 0.75).

Data Conclusion: DCGAN was capable of generating HR breast images from fast pre- and postcontrast LR acquisitions and achieved superior quantitative and qualitative performance in a multireader study.

Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2020;52:1852-1858.
http://dx.doi.org/10.1002/jmri.27256

Morphology of perivascular spaces and enclosed blood vessels in young to middle-aged healthy adults at 7T: Dependences on age, brain region, and breathing gas.

Neuroimage 2020 09 21;218:116978. Epub 2020 May 21.

Biomedical Research Imaging Center, Chapel Hill, NC, USA; Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.

Perivascular spaces (PVSs) are fluid-filled spaces surrounding penetrating blood vessels in the brain and are an integral pathway of the glymphatic system. A PVS and the enclosed blood vessel are commonly visualized as a single vessel-like complex (denoted as PVSV) in high-resolution MRI images. Quantitative characterization of PVSV morphology in MRI images of healthy subjects may serve as a reference for detecting disease-related PVS and/or blood vessel alterations in patients with brain diseases. To this end, we evaluated the age dependences, spatial heterogeneities, and dynamic properties of PVSV morphological features in 45 healthy subjects (21-55 years old), using an ultra-high-resolution three-dimensional transverse relaxation time weighted MRI sequence (0.41 × 0.41 × 0.4 mm) at 7T. Quantitative PVSV parameters, including apparent diameter, count, volume fraction (VF), and relative contrast-to-noise ratio (rCNR), were calculated in the white matter and subcortical structures. Dynamic changes were induced by carbogen breathing, which is known to induce vasodilation and increase the blood oxygenation level in the brain. PVSV count and VF significantly increased with age in the basal ganglia (BG), as did rCNR in the BG, midbrain, and white matter (WM). Apparent PVSV diameter also showed a positive association with age in the three brain regions, although it did not reach statistical significance. The PVSV VF and count showed large inter-subject variations, with coefficients of variation ranging from 0.17 to 0.74 after regressing out age and gender effects. Both apparent diameter and VF exhibited significant spatial heterogeneity, which cannot be explained solely by radio-frequency field inhomogeneities. Carbogen breathing significantly increased VF in the BG and WM, and rCNR in the thalamus, BG, and WM, compared to air breathing. Our results are consistent with gradual dilation of PVSs with age in healthy adults.
The PVSV morphology exhibited spatial heterogeneity and large inter-subject variations and changed during carbogen breathing compared to air breathing.
Source
http://dx.doi.org/10.1016/j.neuroimage.2020.116978
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7485170
September 2020

Designing weighted correlation kernels in convolutional neural networks for functional connectivity based brain disease diagnosis.

Med Image Anal 2020 07 23;63:101709. Epub 2020 Apr 23.

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea. Electronic address:

Functional connectivity networks (FCNs) based on functional magnetic resonance imaging (fMRI) have been widely applied to analyzing and diagnosing brain diseases, such as Alzheimer's disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). Existing studies usually use the Pearson correlation coefficient (PCC) method to construct FCNs, and then extract network measures (e.g., clustering coefficients) as features to learn a diagnostic model. However, valuable observational information in network construction (e.g., the specific contributions of different time points), as well as high-level and high-order network features, is neglected in these studies. In this paper, we first define a novel weighted correlation kernel (called wc-kernel) to measure the correlation of brain regions, in which weighting factors are learned in a data-driven manner to characterize the contributions of different time points, thus conveying richer interaction information among brain regions than the PCC method. Furthermore, we build a wc-kernel based convolutional neural network (CNN) framework (called wck-CNN) for learning hierarchical (i.e., from local to global and from low-level to high-level) features for disease diagnosis from fMRI data. Specifically, we first define a layer to build dynamic FCNs using our proposed wc-kernels. Then, we define another three layers to sequentially extract local (brain region specific), global (brain network specific) and temporal features from the constructed dynamic FCNs for classification. Experimental results on 174 subjects (563 scans in total) with resting-state fMRI (rs-fMRI) data from the ADNI database demonstrate the efficacy of our proposed method.
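As a point of reference for the PCC baseline this paper improves upon, constructing a static FCN from an rs-fMRI time series can be sketched in a few lines of NumPy (the function name `build_fcn` and the toy data are our illustration, not the paper's code):

```python
import numpy as np

def build_fcn(timeseries):
    """Construct a functional connectivity network (FCN) from an
    rs-fMRI time series via the Pearson correlation coefficient (PCC).

    timeseries : (T, R) array of T time points for R brain regions.
    Returns an (R, R) symmetric connectivity matrix with zeroed diagonal.
    """
    # np.corrcoef treats rows as variables, so pass regions in rows.
    fcn = np.corrcoef(timeseries.T)
    np.fill_diagonal(fcn, 0.0)  # drop trivial self-connections
    return fcn

# toy usage: 120 time points, 10 regions
rng = np.random.default_rng(0)
fcn = build_fcn(rng.standard_normal((120, 10)))
```

Network measures (e.g., clustering coefficients) would then be read off this matrix; the learned wc-kernel replaces the uniform weighting over time points that the PCC implicitly applies.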
Source
http://dx.doi.org/10.1016/j.media.2020.101709
July 2020

Estimating Reference Shape Model for Personalized Surgical Reconstruction of Craniomaxillofacial Defects.

IEEE Trans Biomed Eng 2021 02 20;68(2):362-373. Epub 2021 Jan 20.

Objective: To estimate a patient-specific reference bone shape model for a patient with craniomaxillofacial (CMF) defects due to facial trauma.

Methods: We proposed an automatic facial bone shape estimation framework using pre-traumatic conventional portrait photos and post-traumatic head computed tomography (CT) scans via 3D face reconstruction and a deformable shape model. Specifically, a three-dimensional (3D) face was first reconstructed from the patient's pre-traumatic portrait photos. Second, a correlation model between the skin and bone surfaces was constructed using a sparse representation based on the CT images of normal training subjects. Third, by feeding the reconstructed 3D face into the correlation model, an initial reference shape model was generated. We then refined the initial estimate by applying non-rigid surface matching between the initially estimated shape and the patient's post-traumatic bone based on the adaptive-focus deformable shape model (AFDSM). Furthermore, a statistical shape model, built from the normal training subjects, was used to constrain the deformation process and avoid overfitting.

Results And Conclusion: The proposed method was evaluated using both synthetic and real patient data. Experimental results show that the patient's abnormal facial bony structure can be recovered using our method, and the estimated reference shape model is considered clinically acceptable by an experienced CMF surgeon.

Significance: The proposed method is well suited to CMF reconstructive surgical planning for complex CMF defects.
Source
http://dx.doi.org/10.1109/TBME.2020.2990586
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8163108
February 2021

Spatially-Constrained Fisher Representation for Brain Disease Identification With Incomplete Multi-Modal Neuroimages.

IEEE Trans Med Imaging 2020 09 24;39(9):2965-2975. Epub 2020 Mar 24.

Multi-modal neuroimages, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), can provide complementary structural and functional information about the brain, thus facilitating automated brain disease identification. The incomplete-data problem is unavoidable in multi-modal neuroimaging studies due to patient dropout and/or poor data quality. Conventional methods usually discard subjects with missing data, thus significantly reducing the number of training samples. Even though several deep learning methods have been proposed, they usually rely on pre-defined regions-of-interest in neuroimages, requiring disease-specific expert knowledge. To this end, we propose a spatially-constrained Fisher representation framework for brain disease diagnosis with incomplete multi-modal neuroimages. We first impute missing PET images from their corresponding MRI scans using a hybrid generative adversarial network. With the complete (after imputation) MRI and PET data, we then develop a spatially-constrained Fisher representation network to extract statistical descriptors of neuroimages for disease diagnosis, assuming that these descriptors follow a Gaussian mixture model with a strong spatial constraint (i.e., images from different subjects have similar anatomical structures). Experimental results on three databases suggest that our method can synthesize reasonable neuroimages and achieve promising results in brain disease identification, compared with several state-of-the-art methods.
Source
http://dx.doi.org/10.1109/TMI.2020.2983085
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7485604
September 2020

Revealing Developmental Regionalization of Infant Cerebral Cortex Based on Multiple Cortical Properties.

Med Image Comput Comput Assist Interv 2019 Oct 10;11765:841-849. Epub 2019 Oct 10.

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.

The human brain develops dynamically and regionally heterogeneously during the first two postnatal years. Cortical developmental regionalization, i.e., the landscape of cortical heterogeneity in development, reflects the organization of underlying microstructures, which are closely related to the functional principles of the cortex. Therefore, charting early cortical developmental regionalization can provide neurobiologically meaningful units for precise region localization, which will advance our understanding of brain development in this critical period. However, due to the absence of dedicated computational tools and large-scale datasets, our knowledge of early cortical developmental regionalization remains very limited. To fill both the methodological and knowledge gaps, we propose to explore cortical developmental regionalization using a novel method based on nonnegative matrix factorization (NMF), owing to its ability to analyze complex high-dimensional data by representing the data with several bases in a data-driven way. Specifically, a novel multi-view NMF (MV-NMF) method is proposed, in which multiple distinct and complementary cortical properties (i.e., multiple views) are jointly considered to provide a comprehensive observation of the cortical regionalization process. To ensure the sparsity of the discovered regions, an orthogonal constraint defined on the Stiefel manifold is imposed in our MV-NMF method. Meanwhile, a graph-induced constraint is also included to improve the compactness of the discovered regions. Capitalizing on an unprecedentedly large dataset with 1,560 longitudinal MRI scans from 887 infants, we delineate the first neurobiologically meaningful representation of early cortical regionalization, providing a valuable reference for brain development studies.
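For orientation, the plain single-view NMF that MV-NMF extends factorizes a nonnegative data matrix X into bases W and loadings H. A minimal sketch using the classical Lee-Seung multiplicative updates (our illustration only; it omits the paper's orthogonal and graph-induced constraints):

```python
import numpy as np

def nmf(X, k, n_iter=300, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: X ~= W @ H.

    X : (m, n) nonnegative data matrix (e.g., vertices x subjects).
    W : (m, k) nonnegative bases; in the paper's setting each basis
        would correspond to one discovered cortical region.
    H : (k, n) nonnegative loadings.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    eps = 1e-10  # avoid division by zero
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update loadings
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update bases
    return W, H
```

The multiplicative form keeps W and H nonnegative throughout, which is what makes the learned bases interpretable as additive parts (here, candidate regions).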
Source
http://dx.doi.org/10.1007/978-3-030-32245-8_93
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7079741
October 2019

A Longitudinal MRI Study of Amygdala and Hippocampal Subfields for Infants with Risk of Autism.

Graph Learn Med Imaging (2019) 2019 Oct 14;11849:164-171. Epub 2019 Nov 14.

Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA.

Currently, there are still no early biomarkers to detect infants at risk of autism spectrum disorder (ASD), which is mainly diagnosed based on behavioral observations at three or four years of age. Since intervention efforts may miss a critical developmental window after 2 years old, it is clinically significant to identify imaging-based biomarkers at an early stage for better intervention, before behavioral diagnostic signs of ASD typically arise. Previous studies on older children and young adults with ASD demonstrate altered developmental trajectories of the amygdala and hippocampus. However, our knowledge of their developmental trajectories in early postnatal stages remains very limited. In this paper, for the first time, we propose a volume-based analysis of the amygdala and hippocampal subfields of infant subjects at risk of ASD at 6, 12, and 24 months of age. To address the challenge of low tissue contrast and the small structural size of infant amygdala and hippocampal subfields, we propose a novel deep-learning approach, dilated-dense U-Net, to digitally segment the amygdala and hippocampal subfields in a longitudinal dataset, the National Database for Autism Research (NDAR). A volume-based analysis is then performed based on the segmentation results. Our study shows that overgrowth of the amygdala and cornu ammonis sectors (CA) 1-3 may start from 6 months of age, which may be related to the emergence of autism spectrum disorder.
Source
http://dx.doi.org/10.1007/978-3-030-35817-4_20
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7043018
October 2019

Deep Multi-Scale Mesh Feature Learning for Automated Labeling of Raw Dental Surfaces From 3D Intraoral Scanners.

IEEE Trans Med Imaging 2020 07 5;39(7):2440-2450. Epub 2020 Feb 5.

Precisely labeling teeth on digitized 3D dental surface models is a precondition for tooth position rearrangement in orthodontic treatment planning. However, it is a challenging task, primarily due to the abnormal and varying appearance of patients' teeth. The emerging use of intraoral scanners (IOSs) in clinics further increases the difficulty of automated tooth labeling, as the raw surfaces acquired by IOS are typically low-quality at gingival and deep intraoral regions. In recent years, some pioneering end-to-end methods (e.g., PointNet) have been proposed in the computer vision and graphics communities to directly consume raw surfaces for 3D shape segmentation. Although these methods are potentially applicable to our task, most of them fail to capture the fine-grained local geometric context that is critical to the identification of small teeth with varying shapes and appearances. In this paper, we propose an end-to-end deep-learning method, called MeshSegNet, for automated tooth labeling on raw dental surfaces. Using multiple raw surface attributes as inputs, MeshSegNet integrates a series of graph-constrained learning modules along its forward path to hierarchically extract multi-scale local contextual features. Then, a dense fusion strategy is applied to combine local-to-global geometric features for the learning of higher-level features for mesh cell annotation. The predictions produced by MeshSegNet are further post-processed by a graph-cut refinement step for final segmentation. We evaluated MeshSegNet using a real-patient dataset consisting of raw maxillary surfaces acquired by 3D IOS. Experimental results, obtained via 5-fold cross-validation, demonstrate that MeshSegNet significantly outperforms state-of-the-art deep learning methods for 3D shape segmentation.
Source
http://dx.doi.org/10.1109/TMI.2020.2971730
July 2020

Iterative Label Denoising Network: Segmenting Male Pelvic Organs in CT From 3D Bounding Box Annotations.

IEEE Trans Biomed Eng 2020 10 27;67(10):2710-2720. Epub 2020 Jan 27.

Obtaining accurate segmentation of the prostate and nearby organs at risk (e.g., bladder and rectum) in CT images is critical for radiotherapy of prostate cancer. Currently, the leading automatic segmentation algorithms are based on Fully Convolutional Networks (FCNs), which achieve remarkable performance but usually need large-scale datasets with high-quality voxel-wise annotations for fully supervised training. Unfortunately, such annotations are difficult to acquire, which becomes a bottleneck for building accurate segmentation models in real clinical applications. In this paper, we propose a novel weakly supervised segmentation approach that only needs 3D bounding box annotations covering the organs of interest to start the training. Obviously, the bounding box includes many non-organ voxels that carry noisy labels and mislead the segmentation model. To this end, we propose a label denoising module and embed it into the iterative training scheme of the label denoising network (LDnet) for segmentation. The labels of the training voxels are predicted by the tentative LDnet, while the label denoising module identifies the voxels with unreliable labels. As only the good training voxels are preserved, the iteratively re-trained LDnet can gradually refine its segmentation capability. Our results are remarkable, reaching ∼94% (prostate), ∼91% (bladder), and ∼86% (rectum) of the Dice Similarity Coefficients (DSCs) achieved by fully supervised learning on high-quality voxel-wise annotations, and also superior to several state-of-the-art approaches. To the best of our knowledge, this is the first work to achieve voxel-wise segmentation in CT images from simple 3D bounding box annotations, which can greatly reduce labeling effort and meet the demands of practical clinical applications.
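The DSC quoted above is the standard overlap measure between a predicted mask and a reference mask, DSC = 2|A ∩ B| / (|A| + |B|); a minimal sketch (our illustration):

```python
import numpy as np

def dice(seg, gt):
    """Dice Similarity Coefficient (DSC) between two binary masks,
    ranging from 0 (disjoint) to 1 (identical)."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    denom = seg.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, gt).sum() / denom
```

For example, two masks that each contain two voxels and share exactly one give a DSC of 0.5.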
Source
http://dx.doi.org/10.1109/TBME.2020.2969608
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8195631
October 2020

Treatment Outcome Prediction for Cancer Patients based on Radiomics and Belief Function Theory.

IEEE Trans Radiat Plasma Med Sci 2019 Mar 27;3(2):216-224. Epub 2018 Sep 27.

Department of Radiation Oncology, Washington University, Saint Louis, MO 63110 USA.

In this study, we proposed a new radiomics-based treatment outcome prediction model for cancer patients. The prediction model is developed based on belief function theory (BFT) and sparsity learning to address the challenges of redundancy, heterogeneity, and uncertainty of radiomic features, as well as relatively small and unbalanced training samples. The model first selects the most predictive feature subsets from relatively large numbers of radiomic features extracted from pre- and/or in-treatment positron emission tomography (PET) images and available clinical and demographic features. Then an evidential k-nearest neighbor (EK-NN) classifier is proposed to use the selected features for treatment outcome prediction. Twenty-five stage II-III lung, 36 esophagus, 63 stage II-III cervix, and 45 lymphoma cancer patient cases were included in this retrospective study. Performance and robustness of the proposed model were assessed with measures of feature selection stability, outcome prediction accuracy, and receiver operating characteristic (ROC) analysis. Comparisons with other methods were conducted to demonstrate the feasibility and superior performance of the proposed model.
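In the classical EK-NN rule, each neighbor contributes a mass proportional to alpha*exp(-gamma*d^2) to its own class, with the remainder assigned to ignorance, and the masses are pooled by Dempster's rule. A simplified sketch (our illustration; the hyperparameters alpha and gamma are chosen arbitrarily, not taken from the paper):

```python
import numpy as np

def eknn_predict(X_train, y_train, x, k=3, alpha=0.95, gamma=1.0):
    """Evidential k-NN classification in the spirit of the EK-NN rule:
    combine per-neighbor mass functions (singleton class + ignorance)
    with Dempster's rule and return the class with maximal belief."""
    classes = np.unique(y_train)
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    # start from the vacuous mass: all belief on the ignorance set Omega
    m = np.zeros(len(classes))
    m_omega = 1.0
    for i in nn:
        w = alpha * np.exp(-gamma * d[i] ** 2)
        mi = np.where(classes == y_train[i], w, 0.0)  # mass on neighbor's class
        mi_omega = 1.0 - w                            # mass on ignorance
        # Dempster combination for singleton-plus-Omega mass functions
        new_m = m * mi + m * mi_omega + m_omega * mi
        m_omega *= mi_omega
        norm = new_m.sum() + m_omega  # renormalize away the conflict
        m, m_omega = new_m / norm, m_omega / norm
    return classes[np.argmax(m)]
```

Distant neighbors contribute almost nothing beyond ignorance, which is what makes the rule robust to small and unbalanced training sets.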
Source
http://dx.doi.org/10.1109/TRPMS.2018.2872406
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6941853
March 2019

Spatial-Temporal Dependency Modeling and Network Hub Detection for Functional MRI Analysis via Convolutional-Recurrent Network.

IEEE Trans Biomed Eng 2020 08 6;67(8):2241-2252. Epub 2019 Dec 6.

Early identification of dementia at the stage of mild cognitive impairment (MCI) is crucial for timely diagnosis and intervention of Alzheimer's disease (AD). Although several pioneering studies have been devoted to automated AD diagnosis based on resting-state functional magnetic resonance imaging (rs-fMRI), their performance is somewhat limited due to ineffective mining of spatial-temporal dependency. Besides, few of these existing approaches consider the explicit detection and modeling of discriminative brain regions (i.e., network hubs) that are sensitive to AD progression. In this paper, we propose a unique Spatial-Temporal convolutional-recurrent neural Network (STNet) for automated prediction of AD progression and network hub detection from rs-fMRI time series. Our STNet incorporates spatial-temporal information mining and AD-related hub detection into an end-to-end deep learning model. Specifically, we first partition rs-fMRI time series into a sequence of overlapping sliding windows. A sequence of convolutional components is then designed to capture the local-to-global spatially-dependent patterns within each sliding window, based on which we are able to identify discriminative hubs and characterize their unique contributions to disease diagnosis. A recurrent component with long short-term memory (LSTM) units is further employed to model whole-brain temporal dependency from the spatially-dependent pattern sequences, thus capturing the temporal dynamics along time. We evaluate the proposed method on 174 subjects with 563 rs-fMRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, with results suggesting the effectiveness of our method in both disease progression prediction and AD-related hub detection.
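The sliding-window partition described above can be sketched as follows (a minimal illustration; the window length and stride here are arbitrary, not the paper's settings):

```python
import numpy as np

def sliding_window_fc(timeseries, win_len=30, stride=10):
    """Split a (T, R) rs-fMRI time series into overlapping sliding
    windows and compute one (R, R) correlation matrix per window,
    yielding a dynamic functional-connectivity sequence."""
    T, _ = timeseries.shape
    mats = [np.corrcoef(timeseries[s:s + win_len].T)
            for s in range(0, T - win_len + 1, stride)]
    return np.stack(mats)  # shape: (n_windows, R, R)
```

The resulting sequence of matrices is what a downstream convolutional-recurrent model would consume, convolutions within each window and recurrence across windows.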
Source
http://dx.doi.org/10.1109/TBME.2019.2957921
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7439279
August 2020

Automated detection and classification of thyroid nodules in ultrasound images using clinical-knowledge-guided convolutional neural networks.

Med Image Anal 2019 12 5;58:101555. Epub 2019 Sep 5.

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea. Electronic address:

Accurate diagnosis of thyroid nodules using ultrasonography is a valuable but difficult task even for experienced radiologists, considering that both benign and malignant nodules have heterogeneous appearances. Computer-aided diagnosis (CAD) methods could potentially provide objective suggestions to assist radiologists. However, the performance of existing learning-based approaches is still limited, because direct application of general learning models often ignores critical domain knowledge specific to nodule diagnosis. In this study, we propose a novel deep-learning-based CAD system, guided by task-specific prior knowledge, for automated nodule detection and classification in ultrasound images. Our proposed CAD system consists of two stages. First, a multi-scale region-based detection network is designed to learn pyramidal features for detecting nodules at different feature scales. The region proposals are constrained by prior knowledge about the size and shape distributions of real nodules. Then, a multi-branch classification network is proposed to integrate multi-view diagnosis-oriented features, in which each network branch captures and enhances one specific group of characteristics generally used by radiologists. We evaluated and compared our method with state-of-the-art CAD methods and experienced radiologists on two datasets, i.e., Dataset I and Dataset II. The detection and diagnostic accuracies on Dataset I were 97.5% and 97.1%, respectively. Moreover, our CAD system achieved better performance than experienced radiologists on Dataset II, with an accuracy improvement of 8%. The experimental results demonstrate that our proposed method is effective for the discrimination of thyroid nodules.
Source
http://dx.doi.org/10.1016/j.media.2019.101555
December 2019

One-Shot Generative Adversarial Learning for MRI Segmentation of Craniomaxillofacial Bony Structures.

IEEE Trans Med Imaging 2020 03 14;39(3):787-796. Epub 2019 Aug 14.

Compared to computed tomography (CT), magnetic resonance imaging (MRI) delineation of craniomaxillofacial (CMF) bony structures can avoid harmful radiation exposure. However, bony boundaries are blurry in MRI, and structural information needs to be borrowed from CT during the training. This is challenging since paired MRI-CT data are typically scarce. In this paper, we propose to make full use of unpaired data, which are typically abundant, along with a single paired MRI-CT data to construct a one-shot generative adversarial model for automated MRI segmentation of CMF bony structures. Our model consists of a cross-modality image synthesis sub-network, which learns the mapping between CT and MRI, and an MRI segmentation sub-network. These two sub-networks are trained jointly in an end-to-end manner. Moreover, in the training phase, a neighbor-based anchoring method is proposed to reduce the ambiguity problem inherent in cross-modality synthesis, and a feature-matching-based semantic consistency constraint is proposed to encourage segmentation-oriented MRI synthesis. Experimental results demonstrate the superiority of our method both qualitatively and quantitatively in comparison with the state-of-the-art MRI segmentation methods.
Source
http://dx.doi.org/10.1109/TMI.2019.2935409
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7219540
March 2020

Developmental topography of cortical thickness during infancy.

Proc Natl Acad Sci U S A 2019 08 22;116(32):15855-15860. Epub 2019 Jul 22.

Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599;

During the first 2 postnatal years, cortical thickness of the human brain develops dynamically and spatially heterogeneously and likely peaks between 1 and 2 y of age. The striking development renders this period critical for later cognitive outcomes and vulnerable to early neurodevelopmental disorders. However, due to the difficulties in longitudinal infant brain MRI acquisition and processing, our knowledge still remains limited on the dynamic changes, peak age, and spatial heterogeneities of cortical thickness during infancy. To fill this knowledge gap, in this study, we discover the developmental regionalization of cortical thickness, i.e., developmentally distinct regions, each of which is composed of a set of codeveloping cortical vertices, for better understanding of the spatiotemporal heterogeneities of cortical thickness development. We leverage an infant-dedicated computational pipeline, an advanced multivariate analysis method (i.e., nonnegative matrix factorization), and a densely sampled longitudinal dataset with 210 serial MRI scans from 43 healthy infants, with each infant being scheduled to have 7 longitudinal scans at around 1, 3, 6, 9, 12, 18, and 24 mo of age. Our results suggest that, during the first 2 y, the whole-brain average cortical thickness increases rapidly and reaches a plateau at about 14 mo of age and then decreases at a slow pace thereafter. More importantly, each discovered region is structurally and functionally meaningful and exhibits a distinctive developmental pattern, with several regions peaking at varied ages while others keep increasing in the first 2 postnatal years. Our findings provide valuable references and insights for early brain development.
Source
http://dx.doi.org/10.1073/pnas.1821523116
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6689940
August 2019

Topological correction of infant white matter surfaces using anatomically constrained convolutional neural network.

Neuroimage 2019 09 18;198:114-124. Epub 2019 May 18.

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina, 27599, USA. Electronic address:

Reconstruction of accurate cortical surfaces without topological errors (i.e., handles and holes) from infant brain MR images is very important in early brain development studies. However, infant brain MR images typically suffer from extremely low tissue contrast and dynamic imaging appearance patterns. Thus, large numbers of topological errors are inevitable in the segmented infant brain tissue images, leading to inaccurately reconstructed cortical surfaces with topological errors. To address this issue, inspired by recent advances in deep learning, we propose an anatomically constrained network for topological correction of infant cortical surfaces. Specifically, in our method, we first locate regions of potential topological defects by leveraging a topology-preserving level set method. Then, we propose an anatomically constrained network to correct the candidate voxels in the located regions. Since infant cortical surfaces often contain large and complex handles or holes, it is difficult to completely correct all errors in one-shot correction. Therefore, we further enroll these two steps into an iterative framework to gradually correct large topological errors. To the best of our knowledge, this is the first work to introduce a deep learning approach for topological correction of infant cortical surfaces. We compare our method with state-of-the-art methods on both simulated and real topological errors in human infant brain MR images. Moreover, we also validate our method on infant brain MR images of macaques. All experimental results show the superior performance of the proposed method.
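A simple way to check whether residual handles survive on a closed triangle mesh is the Euler characteristic, chi = V - E + F = 2 - 2g for a closed orientable surface of genus g; a genus above zero flags topological defects. A minimal sketch (our illustration, not the paper's topology-preserving level set method):

```python
import numpy as np

def genus(faces):
    """Genus g of a closed orientable triangle mesh, computed from
    the Euler characteristic chi = V - E + F = 2 - 2g.
    A result g > 0 indicates leftover handles on the surface."""
    faces = np.asarray(faces)
    V = len(np.unique(faces))
    # collect undirected edges; each appears in exactly two triangles
    edges = {tuple(sorted(e)) for f in faces
             for e in ((f[0], f[1]), (f[1], f[2]), (f[0], f[2]))}
    chi = V - len(edges) + len(faces)
    return (2 - chi) // 2
```

A tetrahedron (sphere topology) gives g = 0, while a toroidal mesh gives g = 1, i.e., one handle to be corrected.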
Source
http://dx.doi.org/10.1016/j.neuroimage.2019.05.037
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6602545
September 2019

Weakly Supervised Deep Learning for Brain Disease Prognosis Using MRI and Incomplete Clinical Scores.

IEEE Trans Cybern 2020 Jul 26;50(7):3381-3392. Epub 2019 Mar 26.

As a hot topic in brain disease prognosis, predicting clinical measures of subjects based on brain magnetic resonance imaging (MRI) data helps to assess the stage of pathology and predict future development of the disease. Due to incomplete clinical labels/scores, previous learning-based studies often simply discard subjects without ground-truth scores. This would result in limited training data for learning reliable and robust models. Also, existing methods focus only on using hand-crafted features (e.g., image intensity or tissue volume) of MRI data, and these features may not be well coordinated with prediction models. In this paper, we propose a weakly supervised densely connected neural network (wiseDNN) for brain disease prognosis using baseline MRI data and incomplete clinical scores. Specifically, we first extract multiscale image patches (located by anatomical landmarks) from MRI to capture local-to-global structural information of images, and then develop a weakly supervised densely connected network for task-oriented extraction of imaging features and joint prediction of multiple clinical measures. A weighted loss function is further employed to make full use of all available subjects (even those without ground-truth scores at certain time-points) for network training. The experimental results on 1469 subjects from both ADNI-1 and ADNI-2 datasets demonstrate that our proposed method can efficiently predict future clinical measures of subjects.
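The weighted-loss idea, letting subjects with missing scores contribute nothing at the missing entries while still training on their available scores, can be sketched as a masked mean squared error (our illustration, not the wiseDNN code):

```python
import numpy as np

def masked_mse(pred, target, mask):
    """Weighted MSE over clinical scores: entries whose ground truth
    is missing (mask == 0) contribute nothing, so every subject can
    be used for training even when some scores are absent."""
    pred = np.asarray(pred, float)
    target = np.asarray(target, float)
    mask = np.asarray(mask, float)
    # np.where shields missing entries, even if target holds NaN there
    se = np.where(mask > 0, (pred - target) ** 2, 0.0)
    return se.sum() / max(mask.sum(), 1.0)
```

Averaging over the observed entries only keeps the loss scale comparable across subjects with different numbers of available scores.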
Source
http://dx.doi.org/10.1109/TCYB.2019.2904186
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8034591
July 2020