Publications by authors named "Bjoern Menze"

100 Publications

A computed tomography vertebral segmentation dataset with anatomical variations and multi-vendor scanner data.

Sci Data 2021 10 28;8(1):284. Epub 2021 Oct 28.

Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany.

With the advent of deep learning algorithms, fully automated radiological image analysis is within reach. In spine imaging, several atlas- and shape-based as well as deep learning segmentation algorithms have been proposed, allowing for subsequent automated analysis of morphology and pathology. The first "Large Scale Vertebrae Segmentation Challenge" (VerSe 2019) showed that these algorithms perform well on normal anatomy but fail on variants not frequently present in the training dataset. Building on that experience, we report on the substantially expanded VerSe 2020 dataset and results from the second iteration of the VerSe challenge (MICCAI 2020, Lima, Peru). VerSe 2020 comprises annotated spine computed tomography (CT) images from 300 subjects with 4142 fully visualized and annotated vertebrae, collected across multiple centres from four different scanner manufacturers, enriched with cases that exhibit anatomical variants such as enumeration abnormalities (n = 77) and transitional vertebrae (n = 161). Metadata includes vertebral labelling information, voxel-level segmentation masks obtained with a human-machine hybrid algorithm, and anatomical ratings, to enable the development and benchmarking of robust and accurate segmentation algorithms.
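For readers who want to benchmark against such voxel-level annotations, the sketch below shows how per-vertebra Dice scores could be computed from a pair of multi-label masks. The NIfTI file names and the assumption that vertebrae are encoded as integer labels are illustrative, not part of the dataset description.

    import numpy as np
    import nibabel as nib  # assumes masks are distributed as NIfTI volumes

    def per_vertebra_dice(gt_path, pred_path):
        """Dice score for every vertebra label present in the reference mask."""
        gt = nib.load(gt_path).get_fdata().astype(int)
        pred = nib.load(pred_path).get_fdata().astype(int)
        scores = {}
        for label in np.unique(gt):
            if label == 0:  # skip background
                continue
            g, p = gt == label, pred == label
            denom = g.sum() + p.sum()
            scores[int(label)] = 2.0 * np.logical_and(g, p).sum() / denom if denom else float("nan")
        return scores

    # Hypothetical file names, purely for illustration:
    # print(per_vertebra_dice("case_gt.nii.gz", "case_pred.nii.gz"))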
DOI: http://dx.doi.org/10.1038/s41597-021-01060-0
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8553749
October 2021

Face Restoration via Plug-and-Play 3D Facial Priors.

IEEE Trans Pattern Anal Mach Intell 2021 Oct 27;PP. Epub 2021 Oct 27.

State-of-the-art face restoration methods employ deep convolutional neural networks (CNNs) to learn a mapping between degraded and sharp facial patterns by exploring local appearance knowledge. However, most of these methods do not fully exploit facial structures and identity information, and only deal with task-specific face restoration (e.g., face super-resolution or deblurring). In this paper, we propose cross-task and cross-model plug-and-play 3D facial priors to explicitly embed the network with the sharp facial structures for general face restoration tasks. Our 3D priors are the first to explore 3D morphable knowledge based on the fusion of parametric descriptions of face attributes (e.g., identity, facial expression, texture, illumination, and face pose). Furthermore, the priors can easily be incorporated into any network and are highly effective at improving performance and accelerating convergence. Firstly, a 3D face rendering branch is set up to obtain 3D priors of salient facial structures and identity knowledge. Secondly, for better exploiting this hierarchical information (i.e., intensity similarity, 3D facial structure, and identity content), a spatial attention module is designed for image restoration problems. Extensive face restoration experiments including face super-resolution and deblurring demonstrate that the proposed 3D priors achieve superior face restoration results over the state-of-the-art algorithms.
DOI: http://dx.doi.org/10.1109/TPAMI.2021.3123085
October 2021

Automated detection of the contrast phase in MDCT by an artificial neural network improves the accuracy of opportunistic bone mineral density measurements.

Eur Radiol 2021 Oct 23. Epub 2021 Oct 23.

Department of Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str 22, 81675, Munich, Germany.

Objectives: To determine the accuracy of an artificial neural network (ANN) for fully automated detection of the presence and phase of iodinated contrast agent in routine abdominal multidetector computed tomography (MDCT) scans and evaluate the effect of contrast correction for osteoporosis screening.

Methods: This HIPAA-compliant study retrospectively included 579 MDCT scans in 193 patients (62.4 ± 14.6 years, 48 women). Three different ANN models (2D DenseNet with random slice selection, 2D DenseNet with anatomy-guided slice selection, 3D DenseNet) were trained on 462 MDCT scans of 154 patients (threefold cross-validation), who underwent triphasic CT. All ANN models were tested on 117 unseen triphasic scans of 39 patients, as well as on a public MDCT dataset containing 311 patients. In the triphasic test scans, trabecular volumetric bone mineral density (BMD) was calculated using a fully automated pipeline. Root-mean-square errors (RMSE) of BMD measurements with and without correction for contrast application were calculated in comparison to nonenhanced (NE) scans.

Results: The 2D DenseNet with anatomy-guided slice selection outperformed the competing models and achieved an F1 score of 0.98 and an accuracy of 98.3% in the test set (public dataset: F1 score 0.93; accuracy 94.2%). Application of contrast agent resulted in significant BMD biases (all p < .001; portal-venous (PV): RMSE 18.7 mg/ml, mean difference 17.5 mg/ml; arterial (AR): RMSE 6.92 mg/ml, mean difference 5.68 mg/ml). After the fully automated correction, this bias was no longer significant (p > .05; PV: RMSE 9.45 mg/ml, mean difference 1.28 mg/ml; AR: RMSE 3.98 mg/ml, mean difference 0.94 mg/ml).

Conclusion: Automatic detection of the contrast phase in multicenter CT data was achieved with high accuracy, minimizing the contrast-induced error in BMD measurements.

Key Points: • A 2D DenseNet with anatomy-guided slice selection achieved an F1 score of 0.98 and an accuracy of 98.3% in the test set. In a public dataset, an F1 score of 0.93 and an accuracy of 94.2% were obtained. • Automated adjustment for contrast injection improved the accuracy of lumbar bone mineral density measurements (RMSE 18.7 mg/ml vs. 9.45 mg/ml in the portal-venous phase). • An artificial neural network can reliably detect the presence and phase of iodinated contrast agent in multidetector CT scans ( https://github.com/ferchonavarro/anatomy_guided_contrast_c ). This allows minimizing the contrast-induced error in opportunistic bone mineral density measurements.
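As a rough illustration of the error metrics quoted above, the snippet below computes RMSE and mean difference of contrast-phase BMD values against a non-enhanced reference; the numbers are invented and the correction itself is not reproduced here.

    import numpy as np

    def rmse_and_bias(bmd_measured, bmd_reference):
        """Root-mean-square error and mean difference (mg/ml) of BMD values
        measured on contrast-enhanced scans versus the non-enhanced reference."""
        diff = np.asarray(bmd_measured, float) - np.asarray(bmd_reference, float)
        return float(np.sqrt(np.mean(diff ** 2))), float(np.mean(diff))

    # Illustrative values only (not data from the study):
    reference_ne   = [124.5, 101.2, 80.4]
    pv_uncorrected = [142.0, 118.5, 97.3]
    pv_corrected   = [126.1, 103.0, 81.9]
    print(rmse_and_bias(pv_uncorrected, reference_ne))  # larger error before correction
    print(rmse_and_bias(pv_corrected, reference_ne))    # smaller error after correction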
DOI: http://dx.doi.org/10.1007/s00330-021-08284-z
October 2021

Automated claustrum segmentation in human brain MRI using deep learning.

Hum Brain Mapp 2021 Dec 14;42(18):5862-5872. Epub 2021 Sep 14.

TUM-NIC Neuroimaging Center, Munich, Germany.

In the last two decades, neuroscience has produced intriguing evidence for a central role of the claustrum in mammalian forebrain structure and function. However, relatively few in vivo studies of the claustrum exist in humans. A reason for this may be the delicate and sheet-like structure of the claustrum lying between the insular cortex and the putamen, which makes it not amenable to conventional segmentation methods. Recently, Deep Learning (DL) based approaches have been successfully introduced for automated segmentation of complex, subcortical brain structures. In the following, we present a multi-view DL-based approach to segment the claustrum in T1-weighted MRI scans. We trained and evaluated the proposed method in 181 individuals, using bilateral manual claustrum annotations by an expert neuroradiologist as reference standard. Cross-validation experiments yielded median volumetric similarity, robust Hausdorff distance, and Dice score of 93.3%, 1.41 mm, and 71.8%, respectively, representing equal or superior segmentation performance compared to human intra-rater reliability. The leave-one-scanner-out evaluation showed good transferability of the algorithm to images from unseen scanners at slightly inferior performance. Furthermore, we found that DL-based claustrum segmentation benefits from multi-view information and requires a sample size of around 75 MRI scans in the training set. We conclude that the developed algorithm allows for robust automated claustrum segmentation and thus yields considerable potential for facilitating MRI-based research of the human claustrum. The software and models of our method are made publicly available.
DOI: http://dx.doi.org/10.1002/hbm.25655
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8596988
December 2021

AIFNet: Automatic vascular function estimation for perfusion analysis using deep learning.

Med Image Anal 2021 Dec 6;74:102211. Epub 2021 Aug 6.

icometrix, Leuven, Belgium; Medical Imaging Research Center (MIRC), KU Leuven, Leuven, Belgium; Medical Image Computing (MIC), ESAT-PSI, Department of Electrical Engineering, KU Leuven, Leuven, Belgium.

Perfusion imaging is crucial in acute ischemic stroke for quantifying the salvageable penumbra and irreversibly damaged core lesions. As such, it helps clinicians to decide on the optimal reperfusion treatment. In perfusion CT imaging, deconvolution methods are used to obtain clinically interpretable perfusion parameters that allow identifying brain tissue abnormalities. Deconvolution methods require the selection of two reference vascular functions as inputs to the model: the arterial input function (AIF) and the venous output function, with the AIF as the most critical model input. When performed manually, the vascular function selection is time-demanding, suffers from poor reproducibility, and depends on the professional's experience. This leads to potentially unreliable quantification of the penumbra and core lesions and, hence, might harm the treatment decision process. In this work we automate the perfusion analysis with AIFNet, a fully automatic and end-to-end trainable deep learning approach for estimating the vascular functions. Unlike previous methods using clustering or segmentation techniques to select vascular voxels, AIFNet is directly optimized for vascular function estimation, which allows it to better recognise the time-curve profiles. Validation on the public ISLES18 stroke database shows that AIFNet almost reaches inter-rater performance for the vascular function estimation and, subsequently, for the parameter maps and core lesion quantification obtained through deconvolution. We conclude that AIFNet has potential for clinical transfer and could be incorporated in perfusion deconvolution software.
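For context, deconvolution with a selected AIF is classically implemented via a truncated singular value decomposition; the sketch below shows that textbook baseline (not AIFNet itself), with synthetic curves standing in for measured data.

    import numpy as np

    def svd_deconvolution(tissue_curve, aif, dt, threshold=0.2):
        """Truncated-SVD deconvolution: recovers the CBF-scaled residue function
        from a tissue concentration curve and an arterial input function (AIF)."""
        n = len(aif)
        # Lower-triangular convolution matrix built from the AIF.
        A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                           for i in range(n)])
        U, s, Vt = np.linalg.svd(A)
        s_inv = np.where(s > threshold * s.max(), 1.0 / s, 0.0)  # drop small singular values
        k = Vt.T @ np.diag(s_inv) @ U.T @ tissue_curve           # CBF * residue function
        return k.max(), k                                        # (CBF estimate, k(t))

    # Synthetic example in arbitrary units, purely for illustration:
    t = np.arange(0, 60, 1.0)
    aif = np.exp(-((t - 15.0) ** 2) / 30.0)
    tissue = np.convolve(aif, 0.5 * np.exp(-t / 8.0))[:t.size]
    print(svd_deconvolution(tissue, aif, dt=1.0)[0])  # roughly recovers the scale 0.5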
DOI: http://dx.doi.org/10.1016/j.media.2021.102211
December 2021

An automatic multi-tissue human fetal brain segmentation benchmark using the Fetal Tissue Annotation Dataset.

Sci Data 2021 07 6;8(1):167. Epub 2021 Jul 6.

Center for MR Research, University Children's Hospital Zurich, University of Zurich, Zurich, Switzerland.

It is critical to quantitatively analyse the developing human fetal brain in order to fully understand neurodevelopment in both normal fetuses and those with congenital disorders. To facilitate this analysis, automatic multi-tissue fetal brain segmentation algorithms are needed, which in turn requires open datasets of segmented fetal brains. Here we introduce a publicly available dataset of 50 fetal magnetic resonance brain volume reconstructions, covering both pathological and non-pathological cases across a range of gestational ages (20 to 33 weeks), each manually segmented into 7 different tissue categories (external cerebrospinal fluid, grey matter, white matter, ventricles, cerebellum, deep grey matter, brainstem/spinal cord). In addition, we quantitatively evaluate the accuracy of several automatic multi-tissue segmentation algorithms of the developing human fetal brain. Four research groups participated, submitting a total of 10 algorithms, demonstrating the benefits of the dataset for the development of automatic algorithms.
DOI: http://dx.doi.org/10.1038/s41597-021-00946-3
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8260784
July 2021

Development and External Validation of Deep-Learning-Based Tumor Grading Models in Soft-Tissue Sarcoma Patients Using MR Imaging.

Cancers (Basel) 2021 Jun 8;13(12). Epub 2021 Jun 8.

Department of Radiation Oncology, Klinikum Rechts der Isar, Technical University of Munich (TUM), Ismaninger Straße 22, 81675 Munich, Germany.

Background: In patients with soft-tissue sarcoma (STS), tumor grading constitutes a decisive factor in determining the best treatment. Tumor grading is obtained by pathological work-up after focal biopsies. Deep learning (DL)-based imaging analysis may offer an alternative way to characterize STS tissue. In this work, we sought to non-invasively differentiate tumor grading into low-grade (G1) and high-grade (G2/G3) STS using DL techniques based on MR imaging.

Methods: Contrast-enhanced T1-weighted fat-saturated (T1FSGd) MRI sequences and fat-saturated T2-weighted (T2FS) sequences were collected from two independent retrospective cohorts (training: 148 patients, testing: 158 patients). Tumor grading was determined following the French Federation of Cancer Centers Sarcoma Group in pre-therapeutic biopsies. DL models were developed using transfer learning based on the DenseNet 161 architecture.
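A minimal sketch of the kind of transfer-learning setup described (an ImageNet-pretrained DenseNet-161 with a new two-class head for low- vs. high-grade prediction); the freezing strategy, input shape and hyper-parameters are illustrative assumptions, not the authors' exact configuration.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Pretrained DenseNet-161 backbone with a new binary classification head.
    model = models.densenet161(pretrained=True)
    model.classifier = nn.Linear(model.classifier.in_features, 2)

    # Illustrative choice: fine-tune only the last dense block and the classifier.
    for name, param in model.features.named_parameters():
        if not name.startswith(("denseblock4", "norm5")):
            param.requires_grad = False

    optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    # One training step on a dummy batch of MRI slices replicated to three channels:
    x, y = torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 1, 0])
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()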

Results: The T1FSGd and T2FS-based DL models achieved area under the receiver operator characteristic curve (AUC) values of 0.75 and 0.76 on the test cohort, respectively. T1FSGd achieved the best F1-score of all models (0.90). The T2FS-based DL model was able to significantly risk-stratify for overall survival. Attention maps revealed relevant features within the tumor volume and in border regions.

Conclusions: MRI-based DL models are capable of predicting tumor grading with good reproducibility in external validation.
DOI: http://dx.doi.org/10.3390/cancers13122866
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8227009
June 2021

Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation: The M&Ms Challenge.

IEEE Trans Med Imaging 2021 12 30;40(12):3543-3554. Epub 2021 Nov 30.

The emergence of deep learning has considerably advanced the state-of-the-art in cardiac magnetic resonance (CMR) segmentation. Many techniques have been proposed over the last few years, bringing the accuracy of automated segmentation close to human performance. However, these models have been all too often trained and validated using cardiac imaging samples from single clinical centres or homogeneous imaging protocols. This has prevented the development and validation of models that are generalizable across different clinical centres, imaging conditions or scanner vendors. To promote further research and scientific benchmarking in the field of generalizable deep learning for cardiac segmentation, this paper presents the results of the Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation (M&Ms) Challenge, which was recently organized as part of the MICCAI 2020 Conference. A total of 14 teams submitted different solutions to the problem, combining various baseline models, data augmentation strategies, and domain adaptation techniques. The obtained results indicate the importance of intensity-driven data augmentation, as well as the need for further research to improve generalizability towards unseen scanner vendors or new imaging protocols. Furthermore, we present a new resource of 375 heterogeneous CMR datasets acquired by using four different scanner vendors in six hospitals and three different countries (Spain, Canada and Germany), which we provide as open-access for the community to enable future research in the field.
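The importance of intensity-driven augmentation noted in the results can be illustrated with a generic example; the jitter ranges below are arbitrary and not taken from any participating team's pipeline.

    import numpy as np

    def intensity_augment(image, rng=None):
        """Generic intensity augmentation for a grayscale CMR slice:
        random gamma, contrast and brightness jitter (illustrative ranges)."""
        rng = rng or np.random.default_rng()
        img = image.astype(np.float32)
        img = (img - img.min()) / (np.ptp(img) + 1e-8)  # normalise to [0, 1]
        img = img ** rng.uniform(0.7, 1.5)              # gamma
        img = img * rng.uniform(0.9, 1.1)               # contrast scaling
        img = img + rng.uniform(-0.1, 0.1)              # brightness shift
        return np.clip(img, 0.0, 1.0)

    augmented = intensity_augment(np.random.rand(256, 256))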
DOI: http://dx.doi.org/10.1109/TMI.2021.3090082
December 2021

Comparing methods of detecting and segmenting unruptured intracranial aneurysms on TOF-MRAs: The ADAM challenge.

Neuroimage 2021 09 27;238:118216. Epub 2021 May 27.

Department of Informatics, Technische Universität München, Munich, Germany.

Accurate detection and quantification of unruptured intracranial aneurysms (UIAs) is important for rupture risk assessment and to allow an informed treatment decision to be made. Currently, 2D manual measures used to assess UIAs on Time-of-Flight magnetic resonance angiographies (TOF-MRAs) lack 3D information and there is substantial inter-observer variability for both aneurysm detection and assessment of aneurysm size and growth. 3D measures could be helpful to improve aneurysm detection and quantification but are time-consuming and would therefore benefit from a reliable automatic UIA detection and segmentation method. The Aneurysm Detection and segMentation (ADAM) challenge was organised in which methods for automatic UIA detection and segmentation were developed and submitted to be evaluated on a diverse clinical TOF-MRA dataset. A training set (113 cases with a total of 129 UIAs) was released, each case including a TOF-MRA, a structural MR image (T1, T2 or FLAIR), annotation of any present UIA(s) and the centre voxel of the UIA(s). A test set of 141 cases (with 153 UIAs) was used for evaluation. Two tasks were proposed: (1) detection and (2) segmentation of UIAs on TOF-MRAs. Teams developed and submitted containerised methods to be evaluated on the test set. Task 1 was evaluated using metrics of sensitivity and false positive count. Task 2 was evaluated using the Dice similarity coefficient, modified Hausdorff distance (95th percentile) and volumetric similarity. For each task, a ranking was made based on the average of the metrics. In total, eleven teams participated in task 1 and nine of those teams participated in task 2. Task 1 was won by a method specifically designed for the detection task (i.e. not participating in task 2). Based on segmentation metrics, the top two methods for task 2 performed statistically significantly better than all other methods. The detection performance of the top-ranking methods was comparable to visual inspection for larger aneurysms. Segmentation performance of the top-ranking method, after selection of true UIAs, was similar to interobserver performance. The ADAM challenge remains open for new and improved submissions, with a live leaderboard to provide benchmarking for method developments at https://adam.isi.uu.nl/.
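For orientation, lesion-level detection metrics of this kind can be computed by matching predicted to reference aneurysm centres; the simple distance-based matching rule below is an illustrative assumption, not the challenge's exact evaluation criterion.

    import numpy as np

    def detection_metrics(true_centres, pred_centres, max_dist=5.0):
        """Sensitivity and false-positive count for lesion detection, matching each
        prediction to the nearest unmatched reference centre within max_dist (mm)."""
        true_centres = [np.asarray(c, float) for c in true_centres]
        matched = [False] * len(true_centres)
        false_positives = 0
        for p in (np.asarray(c, float) for c in pred_centres):
            dists = [np.linalg.norm(p - t) if not m else np.inf
                     for t, m in zip(true_centres, matched)]
            if dists and min(dists) <= max_dist:
                matched[int(np.argmin(dists))] = True
            else:
                false_positives += 1
        sensitivity = sum(matched) / len(matched) if matched else 1.0
        return sensitivity, false_positives

    print(detection_metrics([(10, 10, 10)], [(11, 10, 9), (40, 40, 40)]))  # (1.0, 1)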
DOI: http://dx.doi.org/10.1016/j.neuroimage.2021.118216
September 2021

Tumor sink effect in 68Ga-PSMA-11 PET: Myth or Reality?

J Nucl Med 2021 May 28. Epub 2021 May 28.

Technical University Munich, Germany.

We aimed to systematically determine the impact of tumor burden on the 68Ga-prostate-specific membrane antigen-11 (68Ga-PSMA) PET biodistribution by the use of quantitative measurements. This international multicenter retrospective analysis included 406 men with prostate cancer who received 68Ga-PSMA PET/CT. Of these, 356 had positive findings and were stratified by quintiles into very low (Q1, ≤25 ml), low (Q2, 25-189 ml), moderate (Q3, 189-532 ml), high (Q4, 532-1355 ml) and very high (Q5, ≥1355 ml) total PSMA-positive tumor volume (PSMA-VOL). PSMA-VOL was obtained by semi-automatic segmentation of total tumor lesions using qPSMA software. Fifty prostate cancer patients with no PSMA-positive lesions (negative scan) served as the control group. Normal organs, which included salivary glands, liver, spleen and kidneys, were semi-automatically segmented using 68Ga-PSMA PET images and the average SUV (SUVmean) was obtained. Correlations of PSMA-VOL as a continuous and as a categorical variable by quintiles with the SUVmean of normal organs were evaluated. The median PSMA-VOL was 302 ml (interquartile range [IQR], 47-1076). The median (IQR) SUVmean of salivary glands, kidneys, liver and spleen was 10.0 (7.7-11.8), 26.0 (20.0-33.4), 3.7 (3.0-4.7) and 5.3 (4.0-7.2), respectively. PSMA-VOL showed a moderate negative correlation with the SUVmean of salivary glands (r=-0.44, p<0.001), kidneys (r=-0.34, p<0.001), and liver (r=-0.30, p<0.001) and a weak negative correlation with spleen SUVmean (r=-0.16, p=0.002). Patients with very high PSMA-VOL (Q5, ≥1355 ml) had significantly lower PSMA uptake of salivary glands, kidneys, liver and spleen compared to the control group, with an average difference of -38.1%, -40.0%, -43.2% and -34.9%, respectively (p<0.001). Tumor sequestration affects the 68Ga-PSMA biodistribution in normal organs. Patients with very high tumor load showed significantly lower uptake of 68Ga-PSMA in normal organs, confirming a tumor sink effect. As similar effects might occur with PSMA-targeted radioligand therapy, these patients might benefit from increased therapeutic activity without exceeding the radiation dose limit for organs at risk.
DOI: http://dx.doi.org/10.2967/jnumed.121.261906
May 2021

AutoImplant 2020-First MICCAI Challenge on Automatic Cranial Implant Design.

IEEE Trans Med Imaging 2021 09 31;40(9):2329-2342. Epub 2021 Aug 31.

The aim of this paper is to provide a comprehensive overview of the MICCAI 2020 AutoImplant Challenge. The approaches and publications submitted and accepted within the challenge will be summarized and reported, highlighting common algorithmic trends and algorithmic diversity. Furthermore, the evaluation results will be presented, compared and discussed with regard to the challenge aim: seeking low-cost, fast and fully automated solutions for cranial implant design. Based on feedback from collaborating neurosurgeons, this paper concludes by stating open issues and post-challenge requirements for intra-operative use. The codes can be found at https://github.com/Jianningli/tmi.
DOI: http://dx.doi.org/10.1109/TMI.2021.3077047
September 2021

Accelerated 3D whole-brain T1, T2, and proton density mapping: feasibility for clinical glioma MR imaging.

Neuroradiology 2021 Nov 9;63(11):1831-1851. Epub 2021 Apr 9.

Radiology & Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, Netherlands.

Purpose: Advanced MRI-based biomarkers offer comprehensive and quantitative information for the evaluation and characterization of brain tumors. In this study, we report initial clinical experience in routine glioma imaging with a novel, fully 3D multiparametric quantitative transient-state imaging (QTI) method for tissue characterization based on T1 and T2 values.

Methods: To demonstrate the viability of the proposed 3D QTI technique, nine glioma patients (grade II-IV), with a variety of disease states and treatment histories, were included in this study. First, we investigated the feasibility of 3D QTI (6:25 min scan time) for its use in clinical routine imaging, focusing on image reconstruction, parameter estimation, and contrast-weighted image synthesis. Second, for an initial assessment of 3D QTI-based quantitative MR biomarkers, we performed a ROI-based analysis to characterize T1 and T2 components in tumor and peritumoral tissue.

Results: The 3D acquisition combined with a compressed sensing reconstruction and neural network-based parameter inference produced parametric maps with high isotropic resolution (1.125 × 1.125 × 1.125 mm voxel size) and whole-brain coverage (22.5 × 22.5 × 22.5 cm FOV), enabling the synthesis of clinically relevant T1-weighted, T2-weighted, and FLAIR contrasts without any extra scan time. Our study revealed increased T1 and T2 values in tumor and peritumoral regions compared to contralateral white matter, good agreement with healthy volunteer data, and high inter-subject consistency.

Conclusion: 3D QTI demonstrated comprehensive tissue assessment of tumor substructures captured in T1 and T2 parameters. Aiming for fast acquisition of quantitative MR biomarkers, 3D QTI has potential to improve disease characterization in brain tumor patients under tight clinical time-constraints.
DOI: http://dx.doi.org/10.1007/s00234-021-02703-0
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8528802
November 2021

Analyzing Longitudinal wb-MRI Data and Clinical Course in a Cohort of Former Smoldering Multiple Myeloma Patients: Connections between MRI Findings and Clinical Progression Patterns.

Cancers (Basel) 2021 Feb 25;13(5). Epub 2021 Feb 25.

Diagnostic and Interventional Radiology, University Hospital Heidelberg, Im Neuenheimer Feld 110, 69120 Heidelberg, Germany.

The purpose of this study was to analyze size and growth dynamics of focal lesions (FL) as well as to quantify diffuse infiltration (DI) in untreated smoldering multiple myeloma (SMM) patients and correlate those MRI features with timepoint and cause of progression. We investigated 199 whole-body magnetic resonance imaging (wb-MRI) scans originating from longitudinal imaging of 60 SMM patients and 39 computed tomography (CT) scans for corresponding osteolytic lesions (OL) in 17 patients. All FLs >5 mm were manually segmented to quantify volume and growth dynamics, and DI was scored, rating four compartments separately in T1- and fat-saturated T2-weighted images. The majority of patients with at least two FLs showed substantial spatial heterogeneity in growth dynamics. The volume of the largest FL (p = 0.001, c-index 0.72), the speed of growth of the fastest growing FL (p = 0.003, c-index 0.75), the DI score (DIS, p = 0.014, c-index 0.67), and its dynamic over time (DIS dynamic, p < 0.001, c-index 0.67) all significantly correlated with the time to progression. Size and growth dynamics of FLs correlated significantly with presence/appearance of OL in CT within 2 years after the respective MRI assessment (p = 0.016 and p = 0.022). DIS correlated with decrease of hemoglobin (p < 0.001). In conclusion, size and growth dynamics of FLs correlate with prognosis and local bone destruction. Connections between MRI findings and progression patterns (fast growing FL-OL; DIS-hemoglobin decrease) might enable more precise diagnostic and therapeutic approaches for SMM patients in the future.
DOI: http://dx.doi.org/10.3390/cancers13050961
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7956649
February 2021

Weakly supervised deep learning for determining the prognostic value of 18F-FDG PET/CT in extranodal natural killer/T cell lymphoma, nasal type.

Eur J Nucl Med Mol Imaging 2021 09 20;48(10):3151-3161. Epub 2021 Feb 20.

Department of Informatics, Technical University of Munich, Munich, Germany.

Purpose: To develop a weakly supervised deep learning (WSDL) method that could utilize incomplete/missing survival data to predict the prognosis of extranodal natural killer/T cell lymphoma, nasal type (ENKTL) based on pretreatment 18F-FDG PET/CT results.

Methods: One hundred and sixty-seven patients with ENKTL who underwent pretreatment 18F-FDG PET/CT were retrospectively collected. Eighty-four patients were followed up for at least 2 years (training set = 64, test set = 20). A WSDL method was developed to enable the integration of the remaining 83 patients with incomplete/missing follow-up information in the training set. To test generalization, these data were derived from three types of scanners. Prediction similarity index (PSI) was derived from deep learning features of images. Its discriminative ability was calculated and compared with that of a conventional deep learning (CDL) method. Univariate and multivariate analyses helped explore the significance of PSI and clinical features.

Results: PSI achieved area under the curve scores of 0.9858 and 0.9946 (training set) and 0.8750 and 0.7344 (test set) in the prediction of progression-free survival (PFS) with the WSDL and CDL methods, respectively. PSI threshold of 1.0 could significantly differentiate the prognosis. In the test set, WSDL and CDL achieved prediction sensitivity, specificity, and accuracy of 87.50% and 62.50%, 83.33% and 83.33%, and 85.00% and 75.00%, respectively. Multivariate analysis confirmed PSI to be an independent significant predictor of PFS in both the methods.

Conclusion: The WSDL-based framework was more effective for extracting 18F-FDG PET/CT features and predicting the prognosis of ENKTL than the CDL method.
DOI: http://dx.doi.org/10.1007/s00259-021-05232-3
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7896833
September 2021

Analyzing magnetic resonance imaging data from glioma patients using deep learning.

Comput Med Imaging Graph 2021 03 2;88:101828. Epub 2020 Dec 2.

Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA.

The quantitative analysis of images acquired in the diagnosis and treatment of patients with brain tumors has seen a significant rise in the clinical use of computational tools. The technology underlying the vast majority of these tools is machine learning and, in particular, deep learning algorithms. This review offers clinical background information on key diagnostic biomarkers in the diagnosis of glioma, the most common primary brain tumor. It offers an overview of publicly available resources and datasets for developing new computational tools and image biomarkers, with emphasis on those related to the Multimodal Brain Tumor Segmentation (BraTS) Challenge. We further offer an overview of the state-of-the-art methods in glioma image segmentation, again with an emphasis on publicly available tools and deep learning algorithms that emerged in the context of the BraTS challenge.
DOI: http://dx.doi.org/10.1016/j.compmedimag.2020.101828
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8040671
March 2021

Deep learning for medical image analysis: a brief introduction.

Neurooncol Adv 2020 Dec 23;2(Suppl 4):iv35-iv41. Epub 2021 Jan 23.

Department of Informatics, TU Munich, Munich, Germany.

Advances in deep learning have led to the development of neural network algorithms which today rival human performance in vision tasks, such as image classification or segmentation. Translation of these techniques into clinical science has also significantly advanced image analysis in neuro-oncology. This has created a need in the neuro-oncology community for understanding the mechanisms behind neural networks and deep learning, as close interaction of computer scientists and neuro-oncology researchers as well as realistic expectations about the possibilities (and limitations) of the current state-of-the-art is pivotal for successful translation of deep learning techniques into practice. In this review, we will briefly introduce the building blocks of neural networks with a particular focus on convolutional neural networks. We will explain why these networks excel at identifying relevant features and how they learn to associate these imaging features with (clinical) features of interest, such as genotype, or how they automatically segment structures of interest in the image volume. We will also discuss challenges for the more widespread use of these algorithms.
DOI: http://dx.doi.org/10.1093/noajnl/vdaa092
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7829473
December 2020

3D Deep Learning Enables Accurate Layer Mapping of 2D Materials.

ACS Nano 2021 Feb 19;15(2):3139-3151. Epub 2021 Jan 19.

Institute for Measurement Systems and Sensor Technology, Department of Electrical and Computer Engineering, Technical University of Munich, 80333 Munich Germany.

Layered, two-dimensional (2D) materials are promising for next-generation photonics devices. Typically, the thickness of mechanically cleaved flakes and chemical vapor deposited thin films is distributed randomly over a large area, making accurate identification of atomic layer numbers time-consuming. Hyperspectral imaging microscopy yields spectral information that can be used to distinguish the spectral differences of specimens of varying thickness. However, its spatial resolution is relatively low due to the nature of spectral imaging. In this work, we present a 3D deep learning solution called DALM (deep-learning-enabled atomic layer mapping) to merge hyperspectral reflection images (high spectral resolution) and RGB images (high spatial resolution) for the identification and segmentation of MoS2 flakes with mono-, bi-, tri-, and multilayer thicknesses. DALM is trained on a small set of labeled images, automatically predicts layer distributions and segments individual layers with high accuracy, and shows robustness to illumination and contrast variations. Further, we show its advantageous performance over the state-of-the-art model that is solely based on RGB microscope images. This AI-supported technique with high speed, spatial resolution, and accuracy allows for reliable computer-aided identification of atomically thin materials.
DOI: http://dx.doi.org/10.1021/acsnano.0c09685
February 2021

Compressive MRI quantification using convex spatiotemporal priors and deep encoder-decoder networks.

Med Image Anal 2021 04 19;69:101945. Epub 2020 Dec 19.

Computer Science Department, Technical University of Munich, Germany.

We propose a dictionary-matching-free pipeline for multi-parametric quantitative MRI image computing. Our approach has two stages based on compressed sensing reconstruction and deep learned quantitative inference. The reconstruction phase is convex and incorporates efficient spatiotemporal regularisations within an accelerated iterative shrinkage algorithm. This minimises the under-sampling (aliasing) artefacts from aggressively short scan times. The learned quantitative inference phase is purely trained on physical simulations (Bloch equations) that are flexible for producing rich training samples. We propose a deep and compact encoder-decoder network with residual blocks in order to embed Bloch manifold projections through multi-scale piecewise affine approximations, and to replace the non-scalable dictionary-matching baseline. Tested on a number of datasets, we demonstrate the effectiveness of the proposed scheme for recovering accurate and consistent quantitative information from novel and aggressively subsampled 2D/3D quantitative MRI acquisition protocols.
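As background for the convex reconstruction stage, the sketch below shows a plain iterative shrinkage-thresholding (ISTA) loop for an l1-regularised least-squares problem; the paper's actual spatiotemporal regularisers and acceleration scheme are not reproduced.

    import numpy as np

    def soft_threshold(x, tau):
        """Proximal operator of the l1 norm."""
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def ista(A, y, lam=0.1, n_iter=200):
        """Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)
            x = soft_threshold(x - step * grad, step * lam)
        return x

    # Tiny synthetic example: recover a sparse vector from under-sampled measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[[3, 50, 97]] = [1.0, -2.0, 0.5]
    x_hat = ista(A, A @ x_true, lam=0.05)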
DOI: http://dx.doi.org/10.1016/j.media.2020.101945
April 2021

Deep learning-enabled multi-organ segmentation in whole-body mouse scans.

Nat Commun 2020 11 6;11(1):5626. Epub 2020 Nov 6.

Department of Informatics, Technical University of Munich, Munich, Germany.

Whole-body imaging of mice is a key source of information for research. Organ segmentation is a prerequisite for quantitative analysis but is a tedious and error-prone task if done manually. Here, we present a deep learning solution called AIMOS that automatically segments major organs (brain, lungs, heart, liver, kidneys, spleen, bladder, stomach, intestine) and the skeleton in less than a second, orders of magnitude faster than prior algorithms. AIMOS matches or exceeds the segmentation quality of state-of-the-art approaches and of human experts. We exemplify its direct applicability for biomedical research by localizing cancer metastases. Furthermore, we show that expert annotations are subject to human error and bias. As a consequence, we show that at least two independently created annotations are needed to assess model performance. Importantly, AIMOS addresses the issue of human bias by identifying the regions where humans are most likely to disagree, and thereby localizes and quantifies this uncertainty for improved downstream analysis. In summary, AIMOS is a powerful open-source tool to increase scalability, reduce bias, and foster reproducibility in many areas of biomedical research.
DOI: http://dx.doi.org/10.1038/s41467-020-19449-7
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7648799
November 2020

Multi-domain convolutional neural network (MD-CNN) for radial reconstruction of dynamic cardiac MRI.

Magn Reson Med 2021 03 13;85(3):1195-1208. Epub 2020 Sep 13.

Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA.

Purpose: Cardiac MR cine imaging allows accurate and reproducible assessment of cardiac function. However, its long scan time not only limits the spatial and temporal resolutions but is also challenging in patients with breath-holding difficulty or non-sinus rhythms. To reduce scan time, we propose a multi-domain convolutional neural network (MD-CNN) for fast reconstruction of highly undersampled radial cine images.

Methods: MD-CNN is a complex-valued network that processes MR data in k-space and image domains via k-space interpolation and image-domain subnetworks for residual artifact suppression. MD-CNN exploits spatio-temporal correlations across timeframes and multi-coil redundancies to enable high acceleration. Radial cine data were prospectively collected in 108 subjects (50 ± 17 y, 72 males) using retrospectively gated acquisition with an 80%:20% split for training/testing. Images were reconstructed by MD-CNN and k-t Radial Sparse-Sense (kt-RASPS) using an undersampled dataset (14 of 196 acquired views; relative acceleration rate = 14). MD-CNN images were evaluated quantitatively using mean-squared-error (MSE) and structural similarity index (SSIM) relative to reference images, and qualitatively by three independent readers for left ventricular (LV) border sharpness and temporal fidelity using a 5-point Likert scale (1-non-diagnostic, 2-poor, 3-fair, 4-good, and 5-excellent).

Results: MD-CNN showed improved MSE and SSIM compared to kt-RASPS (0.11 ± 0.10 vs. 0.61 ± 0.51, and 0.87 ± 0.07 vs. 0.72 ± 0.07, respectively; P < .01). Qualitatively, MD-CNN significantly outperformed kt-RASPS in LV border sharpness (3.87 ± 0.66 vs. 2.71 ± 0.58 at end-diastole, and 3.57 ± 0.6 vs. 2.56 ± 0.6 at end-systole, respectively; P < .01) and temporal fidelity (3.27 ± 0.65 vs. 2.59 ± 0.59; P < .01).

Conclusion: MD-CNN reduces the scan time of cine imaging by a factor of 23.3 and provides superior image quality compared to kt-RASPS.
DOI: http://dx.doi.org/10.1002/mrm.28485
March 2021

Spatial Distribution of Focal Lesions in Whole-Body MRI and Influence of MRI Protocol on Staging in Patients with Smoldering Multiple Myeloma According to the New SLiM-CRAB-Criteria.

Cancers (Basel) 2020 Sep 7;12(9). Epub 2020 Sep 7.

Institute of Diagnostic and Interventional Radiology, Paediatric Radiology and Neuroradiology, University Medical Centre Rostock, Ernst-Heydemann-Str. 6, 18057 Rostock, Germany.

The purpose of this study was to assess how different MRI protocols (spinal vs. spinal plus pelvic vs. whole-body (wb)-MRI) affect staging in patients with smoldering multiple myeloma (SMM), according to the SLiM-CRAB criterion '>1 focal lesion (FL) in MRI'. In this retrospective study, a baseline cohort of 147 SMM patients with wb-MRI at initial diagnosis was investigated, including prognostic data regarding development of CRAB criteria. Fifty-two patients formed a follow-up cohort with a median of three wb-MRIs. The locations of all FLs were determined and it was calculated how staging decisions regarding the criterion '>1 FL in MRI' would have been made if only a limited anatomic area (spine vs. spine plus pelvis) had been covered by the MRI protocol. Furthermore, subgroups of patients selected by different cutoff-protocol combinations were compared regarding their prognosis for development of CRAB criteria. With an MRI protocol limited to spine/spine plus pelvis, only 28%/64% of patients who actually had >1 FL in wb-MRI would have been rated correctly as having '>1 FL in MRI'. Fifty-four percent/36% of patients with exactly 1 FL in spine/spine plus pelvis revealed >1 FL when the entire wb-MRI was analyzed. During follow-up, four more patients developed >1 FL in wb-MRI; both limited MRI protocols would have detected only one of these four patients as having >1 FL at the correct timepoint. Having >1 FL in spine/in spine plus pelvis/in the whole body was associated with a 43%/57%/49% probability of developing CRAB criteria within 2 years. Patients with >3 FL in spine plus pelvis and patients with >4 FL in the whole body had an 80% probability of developing CRAB criteria within 2 years. MRI protocols limited to the spine or to spine plus pelvis lead to substantial underdiagnosis of patients who actually have >1 FL in wb-MRI at baseline and during follow-up, which influences staging and treatment decisions according to the current SLiM-CRAB criteria. However, the spatial distribution of FLs and the analysis of the patients' clinical course indicate that the cutoff for the number of FLs should be adapted to the MRI protocol when using MRI for staging in SMM.
DOI: http://dx.doi.org/10.3390/cancers12092537
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7563298
September 2020

Rapid three-dimensional multiparametric MRI with quantitative transient-state imaging.

Sci Rep 2020 08 13;10(1):13769. Epub 2020 Aug 13.

Imago7 Foundation, Pisa, Italy.

Novel methods for quantitative, transient-state multiparametric imaging are increasingly being demonstrated for assessment of disease and treatment efficacy. Here, we build on these by assessing the most common non-Cartesian readout trajectories (2D/3D radials and spirals), demonstrating efficient anti-aliasing with a k-space view-sharing technique, and proposing novel methods for parameter inference with neural networks that incorporate the estimation of proton density. Our results show good agreement with gold standard and phantom references for all readout trajectories at 1.5 T and 3 T. Parameters inferred with the neural network differed by less than 6.58% from those inferred with a high-resolution dictionary. Concordance correlation coefficients were above 0.92 and the normalized root mean squared error ranged between 4.2 and 12.7% with respect to gold-standard phantom references for T1 and T2. In vivo acquisitions demonstrate sub-millimetric isotropic resolution in under five minutes with reconstruction and inference times < 7 min. Our 3D quantitative transient-state imaging approach could enable high-resolution multiparametric tissue quantification within clinically acceptable acquisition and reconstruction times.
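The agreement metrics quoted above (concordance correlation coefficient and normalized RMSE) can be computed from paired measurements as in the sketch below; the example values are invented for illustration.

    import numpy as np

    def concordance_correlation(x, y):
        """Lin's concordance correlation coefficient between two measurement series."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        cov = np.mean((x - x.mean()) * (y - y.mean()))
        return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

    def nrmse(estimate, reference):
        """Root-mean-square error normalised by the reference dynamic range."""
        estimate, reference = np.asarray(estimate, float), np.asarray(reference, float)
        return np.sqrt(np.mean((estimate - reference) ** 2)) / np.ptp(reference)

    # Illustrative phantom-style T1 values in ms (not data from the study):
    reference = [250, 500, 800, 1200, 1800]
    measured = [260, 480, 810, 1230, 1750]
    print(concordance_correlation(measured, reference), nrmse(measured, reference))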
DOI: http://dx.doi.org/10.1038/s41598-020-70789-2
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7427097
August 2020

Silent 3D MR sequence for quantitative and multicontrast T1 and proton density imaging.

Phys Med Biol 2020 09 16;65(18):185010. Epub 2020 Sep 16.

Technical University Munich, Garching, Germany. GE Global Research Europe, Munich, Germany.

This study aims to develop a silent, fast and 3D method for T1 and proton density (PD) mapping, while generating time series of T1-weighted (T1w) images with bias-field correction. Undersampled T1w images at different effective inversion times (TIs) were acquired using the inversion recovery prepared RUFIS sequence with an interleaved k-space trajectory. Unaliased images were reconstructed by constraining the signal evolution to a temporal subspace which was learned from the signal model. Parameter maps were obtained by fitting the data to the signal model, and bias-field correction was conducted on T1w images. Accuracy and repeatability of the method were assessed in repeated experiments with a phantom and volunteers. For the phantom study, T1 values obtained by the proposed method were highly consistent with values from the gold standard method, R = 0.9976. Coefficients of variation (CVs) ranged from 0.09% to 0.83%. For the volunteer study, T1 values from gray and white matter regions were consistent with literature values, and peaks of gray and white matter can be clearly delineated on whole-brain T1 histograms. CVs ranged from 0.01% to 2.30%. The acoustic noise measured at the scanner isocenter was 2.6 dBA higher compared to the in-bore background. Being rapid and quiet, the proposed method is shown to produce accurate T1 and PD maps with high repeatability by reconstructing sparsely sampled T1w images at different TIs using a temporal subspace. Our approach can greatly enhance patient comfort during examination and therefore increase the acceptance of the procedure.
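A simplified stand-in for the parameter-fitting step is shown below: fitting the textbook inversion-recovery magnitude signal to samples at several effective inversion times. The full RUFIS signal model and subspace reconstruction used in the paper are not reproduced.

    import numpy as np
    from scipy.optimize import curve_fit

    def ir_signal(ti, pd, t1):
        """Textbook inversion-recovery magnitude signal; a simplified stand-in
        for the full signal model used in the paper."""
        return np.abs(pd * (1.0 - 2.0 * np.exp(-ti / t1)))

    # Illustrative effective inversion times (ms) and a noisy synthetic signal:
    ti = np.array([100, 300, 600, 1000, 1500, 2500], float)
    rng = np.random.default_rng(1)
    signal = ir_signal(ti, pd=1.0, t1=900.0) + 0.01 * rng.standard_normal(ti.size)

    (pd_fit, t1_fit), _ = curve_fit(ir_signal, ti, signal, p0=[1.0, 800.0])
    print(f"fitted T1 ~ {t1_fit:.0f} ms, PD ~ {pd_fit:.2f}")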
DOI: http://dx.doi.org/10.1088/1361-6560/aba5e8
September 2020

Gold Nanoparticle Mediated Multi-Modal CT Imaging of Hsp70 Membrane-Positive Tumors.

Cancers (Basel) 2020 May 22;12(5). Epub 2020 May 22.

Central Institute for Translational Cancer Research (TranslaTUM), Klinikum rechts der Isar der Technischen Universität München, 81675 Munich, Germany.

Imaging techniques such as computed tomography (CT) play a major role in clinical imaging and diagnosis of malignant lesions. In recent years, metal nanoparticle platforms have enabled effective payload delivery for several imaging techniques. Due to the possibility of surface modification, metal nanoparticles are predestined to facilitate molecular tumor targeting. In this work, we demonstrate the feasibility of anti-plasma membrane Heat shock protein 70 (Hsp70) antibody functionalized gold nanoparticles (cmHsp70.1-AuNPs) for tumor-specific multimodal imaging. Membrane-associated Hsp70 is exclusively presented on the plasma membrane of malignant cells of multiple tumor entities but not on corresponding normal cells, predestining this target for tumor-selective in vivo imaging. In vitro microscopic analysis revealed the presence of cmHsp70.1-AuNPs in the cytosol of tumor cell lines after internalization via the endo-lysosomal pathway. In preclinical models, the biodistribution as well as the intratumoral enrichment of AuNPs were examined 24 h after i.v. injection in tumor-bearing mice. In parallel to spectral CT analysis, histological analysis confirmed the presence of AuNPs within tumor cells. In contrast to control AuNPs, a significant enrichment of cmHsp70.1-AuNPs was detected selectively inside tumor cells in different tumor mouse models. Furthermore, a machine-learning approach was developed to analyze AuNP accumulations in tumor tissues and organs. In summary, utilizing mHsp70 on tumor cells as a target for the guidance of cmHsp70.1-AuNPs facilitates an enrichment and uniform distribution of nanoparticles in mHsp70-expressing tumor cells that enables various microscopic imaging techniques and spectral-CT-based tumor delineation in vivo.
DOI: http://dx.doi.org/10.3390/cancers12051331
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7281090
May 2020

BraTS Toolkit: Translating BraTS Brain Tumor Segmentation Algorithms Into Clinical and Scientific Practice.

Front Neurosci 2020 29;14:125. Epub 2020 Apr 29.

Image-Based Biomedical Modeling, Department of Informatics, Technical University of Munich, Munich, Germany.

Despite great advances in brain tumor segmentation and clear clinical need, translation of state-of-the-art computational methods into clinical routine and scientific practice remains a major challenge. Several factors impede successful implementations, including data standardization and preprocessing. However, these steps are pivotal for the deployment of state-of-the-art image segmentation algorithms. To overcome these issues, we present BraTS Toolkit. BraTS Toolkit is a holistic approach to brain tumor segmentation and consists of three components: First, the BraTS Preprocessor facilitates data standardization and preprocessing for researchers and clinicians alike. It covers the entire image analysis workflow prior to tumor segmentation, from image conversion and registration to brain extraction. Second, BraTS Segmentor enables orchestration of BraTS brain tumor segmentation algorithms for generation of fully automated segmentations. Finally, BraTS Fusionator can combine the resulting candidate segmentations into consensus segmentations using fusion methods such as majority voting and iterative SIMPLE fusion. The capabilities of our tools are illustrated with a practical example to enable easy translation to clinical and scientific practice.
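A minimal illustration of the simplest fusion strategy mentioned for BraTS Fusionator, voxel-wise majority voting over candidate label maps; this is a sketch of the idea, not the toolkit's API.

    import numpy as np

    def majority_vote(segmentations):
        """Voxel-wise majority vote over a list of candidate label maps of equal shape."""
        stack = np.stack(segmentations, axis=0).astype(int)
        labels = np.unique(stack)
        votes = np.stack([(stack == lab).sum(axis=0) for lab in labels], axis=0)
        return labels[np.argmax(votes, axis=0)]

    # Three toy candidate segmentations of a 2 x 3 image:
    candidates = [np.array([[0, 1, 1], [2, 2, 0]]),
                  np.array([[0, 1, 2], [2, 0, 0]]),
                  np.array([[1, 1, 1], [2, 2, 0]])]
    print(majority_vote(candidates))  # [[0 1 1] [2 2 0]]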
DOI: http://dx.doi.org/10.3389/fnins.2020.00125
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7201293
April 2020

Deep complex convolutional network for fast reconstruction of 3D late gadolinium enhancement cardiac MRI.

NMR Biomed 2020 07 30;33(7):e4312. Epub 2020 Apr 30.

Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts.

Several deep-learning models have been proposed to shorten MRI scan time. Prior deep-learning models that utilize real-valued kernels have limited capability to learn rich representations of complex MRI data. In this work, we utilize a complex-valued convolutional network (ℂNet) for fast reconstruction of highly under-sampled MRI data and evaluate its ability to rapidly reconstruct 3D late gadolinium enhancement (LGE) data. ℂNet preserves the complex nature and optimal combination of real and imaginary components of MRI data throughout the reconstruction process by utilizing complex-valued convolution, novel radial batch normalization, and complex activation function layers in a U-Net architecture. A prospectively under-sampled 3D LGE cardiac MRI dataset of 219 patients (17 003 images) at acceleration rates R = 3 through R = 5 was used to evaluate ℂNet. The dataset was further retrospectively under-sampled to a maximum of R = 8 to simulate higher acceleration rates. We created three reconstructions of the 3D LGE dataset using (1) ℂNet, (2) a compressed-sensing-based low-dimensional-structure self-learning and thresholding algorithm (LOST), and (3) a real-valued U-Net (realNet) with the same number of parameters as ℂNet. LOST-reconstructed data were considered the reference for training and evaluation of all models. The reconstructed images were quantitatively evaluated using mean-squared error (MSE) and the structural similarity index measure (SSIM), and subjectively evaluated by three independent readers. Quantitatively, ℂNet-reconstructed images had significantly improved MSE and SSIM values compared with realNet (MSE, 0.077 versus 0.091; SSIM, 0.876 versus 0.733, respectively; p < 0.01). Subjective quality assessment showed that ℂNet-reconstructed image quality was similar to that of compressed sensing and significantly better than that of realNet. ℂNet reconstruction was also more than 300 times faster than compressed sensing. Retrospective under-sampled images demonstrate the potential of ℂNet at higher acceleration rates. ℂNet enables fast reconstruction of highly accelerated 3D MRI with superior performance to real-valued networks, and achieves faster reconstruction than compressed sensing.
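As background, a complex-valued convolution can be expressed with two real-valued convolutions; the PyTorch sketch below illustrates only that idea and omits the radial batch normalization and complex activation layers of the authors' network.

    import torch
    import torch.nn as nn

    class ComplexConv2d(nn.Module):
        """Complex 2D convolution built from two real convolutions:
        (a + ib) * (w_r + i*w_i) = (a*w_r - b*w_i) + i*(a*w_i + b*w_r)."""
        def __init__(self, in_ch, out_ch, kernel_size, **kwargs):
            super().__init__()
            self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)
            self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)

        def forward(self, real, imag):
            return (self.conv_r(real) - self.conv_i(imag),
                    self.conv_i(real) + self.conv_r(imag))

    # Illustrative usage on a complex image split into real/imaginary parts:
    layer = ComplexConv2d(1, 8, kernel_size=3, padding=1)
    real, imag = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
    out_real, out_imag = layer(real, imag)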
DOI: http://dx.doi.org/10.1002/nbm.4312
July 2020

Predicting Glioblastoma Recurrence from Preoperative MR Scans Using Fractional-Anisotropy Maps with Free-Water Suppression.

Cancers (Basel) 2020 Mar 19;12(3). Epub 2020 Mar 19.

Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, 81675 Munich, Germany.

Diffusion tensor imaging (DTI), and fractional-anisotropy (FA) maps in particular, have shown promise in predicting areas of tumor recurrence in glioblastoma. However, analysis of peritumoral edema, where most recurrences occur, is impeded by free-water contamination. In this study, we evaluated the benefits of a novel, deep-learning-based approach for the free-water correction (FWC) of DTI data for prediction of later recurrence. We investigated 35 glioblastoma cases from our prospective glioma cohort. A preoperative MR image and the first MR scan showing tumor recurrence were semiautomatically segmented into areas of contrast-enhancing tumor, edema, or recurrence of the tumor. The 10th, 50th and 90th percentiles and mean of FA and mean-diffusivity (MD) values (both for the original and FWC-DTI data) were collected for areas with and without recurrence in the peritumoral edema. We found significant differences in the FWC-FA maps between areas of recurrence-free edema and areas with later tumor recurrence, whereas differences in noncorrected FA maps were less pronounced. Consequently, a generalized mixed-effect model had a significantly higher area under the curve when using FWC-FA maps (AUC = 0.9) compared to noncorrected maps (AUC = 0.77, p < 0.001). This may reflect tumor infiltration that is not visible in conventional imaging, and may therefore reveal important information for personalized treatment decisions.
DOI: http://dx.doi.org/10.3390/cancers12030728
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7140058
March 2020

Labeling Vertebrae with Two-dimensional Reformations of Multidetector CT Images: An Adversarial Approach for Incorporating Prior Knowledge of Spine Anatomy.

Radiol Artif Intell 2020 Mar 25;2(2):e190074. Epub 2020 Mar 25.

Department of Informatics (A.S., B.H.M.) and Department of Neuroradiology, School of Medicine (A.S., A.V., J.S.K.), Technical University of Munich; Department of Diagnostic and Interventional Neuroradiology, Klinikum Rechts der Isar, Ismaninger Str 22, 81675 Munich, Germany (A.S.); and Friedrich Miescher Institute for Biomedical Engineering, Basel, Switzerland (M.R.).

Purpose: To use and test a labeling algorithm that operates on two-dimensional reformations rather than three-dimensional data to locate and identify vertebrae.

Materials And Methods: The authors improved the Btrfly Net, a fully convolutional network architecture described by Sekuboyina et al. that works on sagittal and coronal maximum intensity projections (MIPs), and augmented it with two additional components: spine localization and adversarial a priori learning. Furthermore, two variants of adversarial training schemes that incorporated the anatomic a priori knowledge into the Btrfly Net were explored. The proposed approach for labeling vertebrae was investigated on three datasets: a public benchmarking dataset of 302 CT scans and two in-house datasets with a total of 238 CT scans. The Wilcoxon signed rank test was employed to compute the statistical significance of the improvement in performance observed with various architectural components in the authors' approach.

Results: On the public dataset, the authors' approach using the described Btrfly Net with energy-based prior encoding (Btrfly) network performed as well as current state-of-the-art methods, achieving a statistically significant (p < .001) vertebrae identification rate of 88.5% ± 0.2 (standard deviation) and localization distances of less than 7 mm. On the in-house datasets that had a higher interscan data variability, an identification rate of 85.1% ± 1.2 was obtained.

Conclusion: An identification performance comparable to existing three-dimensional approaches was achieved when labeling vertebrae on two-dimensional MIPs. The performance was further improved using the proposed adversarial training regimen that effectively enforced local spine a priori knowledge during training. Spine localization increased the generalizability of our approach by homogenizing the content in the MIPs. © RSNA, 2020.
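For orientation, the sagittal and coronal maximum intensity projections consumed by the Btrfly Net can be obtained with simple axis-wise maxima, as sketched below; the axis ordering is an assumption about the volume layout.

    import numpy as np

    def sagittal_coronal_mips(volume):
        """Sagittal and coronal maximum intensity projections of a CT volume.
        Assumes axes are ordered (sagittal, coronal, axial); adjust to your data."""
        return volume.max(axis=0), volume.max(axis=1)

    # Toy volume purely for illustration:
    vol = np.random.rand(160, 160, 320)
    sagittal_mip, coronal_mip = sagittal_coronal_mips(vol)
    print(sagittal_mip.shape, coronal_mip.shape)  # (160, 320) (160, 320)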
DOI: http://dx.doi.org/10.1148/ryai.2020190074
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8017405
March 2020

Machine learning analysis of whole mouse brain vasculature.

Nat Methods 2020 04 11;17(4):442-449. Epub 2020 Mar 11.

Institute for Tissue Engineering and Regenerative Medicine (iTERM), Helmholtz Zentrum München, Neuherberg, Germany.

Tissue clearing methods enable the imaging of biological specimens without sectioning. However, reliable and scalable analysis of large imaging datasets in three dimensions remains a challenge. Here we developed a deep learning-based framework to quantify and analyze brain vasculature, named Vessel Segmentation & Analysis Pipeline (VesSAP). Our pipeline uses a convolutional neural network (CNN) with a transfer learning approach for segmentation and achieves human-level accuracy. By using VesSAP, we analyzed the vascular features of whole C57BL/6J, CD1 and BALB/c mouse brains at the micrometer scale after registering them to the Allen mouse brain atlas. We report evidence of secondary intracranial collateral vascularization in CD1 mice and find reduced vascularization of the brainstem in comparison to the cerebrum. Thus, VesSAP enables unbiased and scalable quantifications of the angioarchitecture of cleared mouse brains and yields biological insights into the vascular function of the brain.
DOI: http://dx.doi.org/10.1038/s41592-020-0792-1
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7591801
April 2020

Cellular and Molecular Probing of Intact Human Organs.

Cell 2020 02 13;180(4):796-812.e19. Epub 2020 Feb 13.

Institute for Tissue Engineering and Regenerative Medicine (iTERM), Helmholtz Zentrum München, 85764 Neuherberg, Germany; Institute for Stroke and Dementia Research (ISD), University Hospital, Ludwig Maximilian University of Munich (LMU), 81377 Munich, Germany; Munich Cluster for Systems Neurology (SyNergy), 81377 Munich, Germany.

Optical tissue transparency permits scalable cellular and molecular investigation of complex tissues in 3D. Adult human organs are particularly challenging to render transparent because of the accumulation of dense and sturdy molecules in decades-aged tissues. To overcome these challenges, we developed SHANEL, a method based on a new tissue permeabilization approach to clear and label stiff human organs. We used SHANEL to render the intact adult human brain and kidney transparent and perform 3D histology with antibodies and dyes at centimeter depth. Thereby, we revealed structural details of the intact human eye, human thyroid, human kidney, and transgenic pig pancreas at cellular resolution. Furthermore, we developed a deep learning pipeline to analyze millions of cells in cleared human brain tissues within hours with standard lab computers. Overall, SHANEL is a robust and unbiased technology to chart the cellular and molecular architecture of large intact mammalian organs.
DOI: http://dx.doi.org/10.1016/j.cell.2020.01.030
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7557154
February 2020