Publications by authors named "Spyridon Bakas"

43 Publications

Radiomics analysis for predicting pembrolizumab response in patients with advanced rare cancers.

J Immunother Cancer 2021 Apr;9(4)

Department of Investigational Cancer Therapeutics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA

Background: We present a radiomics-based model for predicting response to pembrolizumab in patients with advanced rare cancers.

Methods: The study included 57 patients with advanced rare cancers who were enrolled in our phase II clinical trial of pembrolizumab. Tumor response was evaluated using Response Evaluation Criteria in Solid Tumors (RECIST) 1.1 and immune-related RECIST (irRECIST). Patients were categorized as having controlled disease (n=20; stable disease, partial response, or complete response) or progressive disease (n=37). We used 3D Slicer to segment target lesions on standard-of-care, pretreatment contrast-enhanced CT scans. We extracted 610 features (10 histogram-based features and 600 second-order texture features) from each volume of interest. Least absolute shrinkage and selection operator (LASSO) logistic regression was used to detect the most discriminatory features. Selected features were used to create an XGBoost classification model for the prediction of tumor response to pembrolizumab. Leave-one-out cross-validation was performed to assess model performance.
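For orientation, a minimal sketch of a pipeline of this kind (LASSO-based feature selection followed by an XGBoost classifier under leave-one-out cross-validation) is shown below. The feature matrix, labels, and hyperparameters are placeholders, not the study's actual data or settings.

```python
# Sketch of the described pipeline: LASSO feature selection + XGBoost,
# evaluated with leave-one-out cross-validation. X (patients x 610 features)
# and y (1 = controlled disease, 0 = progressive disease) are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(57, 610))          # radiomic features (placeholder)
y = rng.integers(0, 2, size=57)         # response labels (placeholder)

loo = LeaveOneOut()
preds = np.zeros_like(y)
for train_idx, test_idx in loo.split(X):
    scaler = StandardScaler().fit(X[train_idx])
    Xtr, Xte = scaler.transform(X[train_idx]), scaler.transform(X[test_idx])

    # L1-penalized logistic regression (LASSO) keeps the discriminative features.
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    lasso.fit(Xtr, y[train_idx])
    selected = np.flatnonzero(lasso.coef_.ravel())
    if selected.size == 0:              # fall back to all features if none survive
        selected = np.arange(X.shape[1])

    clf = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
    clf.fit(Xtr[:, selected], y[train_idx])
    preds[test_idx] = clf.predict(Xte[:, selected])

print(f"LOOCV accuracy: {(preds == y).mean():.3f}")
```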

Findings: The 10 most relevant radiomics features were selected; XGBoost-based classification successfully differentiated between controlled disease (complete response, partial response, stable disease) and progressive disease with high accuracy, sensitivity, and specificity in patients assessed by RECIST (94.7%, 97.3%, and 90%, respectively; p<0.001) and in patients assessed by irRECIST (94.7%, 93.9%, and 95.8%, respectively; p<0.001). Additionally, the features common to the RECIST and irRECIST groups were also highly predictive of pembrolizumab response, with accuracy, sensitivity, and specificity of 94.7%, 97%, and 90% (p<0.001) and 96%, 96%, and 95% (p<0.001), respectively.

Conclusion: Our radiomics-based signature identified imaging differences that predicted pembrolizumab response in patients with advanced rare cancer.

http://dx.doi.org/10.1136/jitc-2020-001752
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8051405
April 2021

Integrative radiomic analysis for pre-surgical prognostic stratification of glioblastoma patients: from advanced to basic MRI protocols.

Proc SPIE Int Soc Opt Eng 2020 Feb 16;11315. Epub 2020 Mar 16.

University of Pennsylvania, Department of Radiology, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, USA.

Glioblastoma, the most common and aggressive adult brain tumor, is considered non-curative at diagnosis. Current literature shows promise for imaging-based overall survival prediction for patients with glioblastoma while integrating advanced (structural, perfusion, and diffusion) multiparametric magnetic resonance imaging (Adv-mpMRI). However, most patients prior to initiation of therapy typically undergo only basic structural mpMRI (Bas-mpMRI, i.e., T1, T1-Gd, T2, T2-FLAIR) pre-operatively, rather than Adv-mpMRI. Here we assess a retrospective cohort of 101 glioblastoma patients with available Adv-mpMRI from a previous study, which has shown that an initial feature panel (IFP) extracted from Adv-mpMRI can yield accurate overall survival stratification. We further focus on demonstrating that equally accurate prediction models can be constructed using augmented feature panels (AFP) extracted solely from Bas-mpMRI, obviating the need for Adv-mpMRI. The classification accuracy of the model utilizing Adv-mpMRI protocols and the IFP was 72.77%, and improved to 74.26% when utilizing the AFP on Bas-mpMRI. Furthermore, Kaplan-Meier analysis demonstrated superior classification of subjects into short-, intermediate-, and long-survivor classes when using the AFP on Bas-mpMRI. This quantitative evaluation indicates that accurate survival prediction in glioblastoma patients is feasible using solely Bas-mpMRI, and that integrative radiomic analysis can compensate for the lack of Adv-mpMRI. Our finding holds promise for predicting overall survival based on commonly-acquired Bas-mpMRI, and hence for potential generalization across multiple institutions that may not have access to Adv-mpMRI, facilitating better patient selection.
http://dx.doi.org/10.1117/12.2566505
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7971448
February 2020

Analyzing magnetic resonance imaging data from glioma patients using deep learning.

Comput Med Imaging Graph 2021 Mar 2;88:101828. Epub 2020 Dec 2.

Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA. Electronic address:

The quantitative analysis of images acquired in the diagnosis and treatment of patients with brain tumors has seen a significant rise in the clinical use of computational tools. The technology underlying the vast majority of these tools is machine learning and, in particular, deep learning algorithms. This review offers clinical background information on key diagnostic biomarkers in the diagnosis of glioma, the most common primary brain tumor. It offers an overview of publicly available resources and datasets for developing new computational tools and image biomarkers, with emphasis on those related to the Multimodal Brain Tumor Segmentation (BraTS) Challenge. We further offer an overview of the state-of-the-art methods in glioma image segmentation, again with an emphasis on publicly available tools and deep learning algorithms that emerged in the context of the BraTS challenge.
http://dx.doi.org/10.1016/j.compmedimag.2020.101828
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8040671
March 2021

Multi-institutional noninvasive in vivo characterization of IDH, 1p/19q, and EGFRvIII in glioma using neuro-Cancer Imaging Phenomics Toolkit (neuro-CaPTk).

Neurooncol Adv 2020 Dec 23;2(Suppl 4):iv22-iv34. Epub 2021 Jan 23.

Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA.

Background: Gliomas represent a biologically heterogeneous group of primary brain tumors with uncontrolled cellular proliferation and diffuse infiltration that renders them almost incurable, thereby leading to a grim prognosis. Recent comprehensive genomic profiling has greatly elucidated the molecular hallmarks of gliomas, including mutations in isocitrate dehydrogenase 1 and 2 (IDH1 and IDH2), loss of chromosomes 1p and 19q (1p/19q), and epidermal growth factor receptor variant III (EGFRvIII). Detection of these molecular alterations is based on ex vivo analysis of surgically resected tissue specimens that are sometimes not adequate for testing and/or do not capture the spatial tumor heterogeneity of the neoplasm.

Methods: We developed a method for detection of radiogenomic markers of IDH in both lower-grade gliomas (WHO grade II and III tumors) and glioblastoma (WHO grade IV), of 1p/19q in IDH-mutant lower-grade gliomas, and of EGFRvIII in glioblastoma. Preoperative MRIs of 473 glioma patients from 3 of the studies participating in the ReSPOND consortium (collection I: Hospital of the University of Pennsylvania [HUP: n = 248], collection II: The Cancer Imaging Archive [TCIA; n = 192], and collection III: Ohio Brain Tumor Study [OBTS, n = 33]) were collected. The Neuro-Cancer Imaging Phenomics Toolkit (neuro-CaPTk), a modular platform available for cancer imaging analytics and machine learning, was leveraged to extract histogram, shape, anatomical, and texture features from delineated tumor subregions and to integrate these features using a support vector machine to generate models predictive of IDH, 1p/19q, and EGFRvIII. The models were validated using 3 configurations: (1) 70-30% training-testing splits or 10-fold cross-validation within individual collections, (2) 70-30% training-testing splits within merged collections, and (3) training on one collection and testing on another.

Results: These models achieved a classification accuracy of 86.74% (HUP), 85.45% (TCIA), and 75.15% (TCIA) in identifying EGFRvIII, IDH, and 1p/19q, respectively, in configuration I. The model, when applied on combined data in configuration II, yielded a classification success rate of 82.50% in predicting IDH mutation (HUP + TCIA + OBTS). The model, when trained on the TCIA dataset, yielded a classification accuracy of 84.88% in predicting IDH in the HUP dataset.

Conclusions: Using machine learning algorithms, high accuracy was achieved in the prediction of IDH, 1p/19q, and EGFRvIII status. Neuro-CaPTk encompasses all the pipelines required to replicate these analyses in multi-institutional settings and could also be used for other radio(geno)mic analyses.
http://dx.doi.org/10.1093/noajnl/vdaa128
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7829474
December 2020

Reproducibility analysis of multi-institutional paired expert annotations and radiomic features of the Ivy Glioblastoma Atlas Project (Ivy GAP) dataset.

Med Phys 2020 Dec 4;47(12):6039-6052. Epub 2020 Dec 4.

Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA.

Purpose: The availability of radiographic magnetic resonance imaging (MRI) scans for the Ivy Glioblastoma Atlas Project (Ivy GAP) has opened up opportunities for development of radiomic markers for prognostic/predictive applications in glioblastoma (GBM). In this work, we address two critical challenges with regard to developing robust radiomic approaches: (a) the lack of availability of reliable segmentation labels for glioblastoma tumor sub-compartments (i.e., enhancing tumor, non-enhancing tumor core, peritumoral edematous/infiltrated tissue) and (b) identifying "reproducible" radiomic features that are robust to segmentation variability across readers/sites.

Acquisition And Validation Methods: From TCIA's Ivy GAP cohort, we obtained a paired set (n = 31) of expert annotations approved by two board-certified neuroradiologists, at the Hospital of the University of Pennsylvania (UPenn) and at Case Western Reserve University (CWRU). For these studies, we performed a reproducibility study that assessed the variability in (a) segmentation labels and (b) radiomic features between these paired annotations. The radiomic variability was assessed on a comprehensive panel of 11,700 radiomic features, including intensity, volumetric, morphologic, histogram-based, and textural parameters, extracted for each of the paired sets of annotations. Our results demonstrated (a) a high level of inter-rater agreement (median DICE ≥0.8 for all sub-compartments), and (b) ≈24% of the extracted radiomic features being highly correlated (based on Spearman's rank correlation coefficient) across the paired annotations, i.e., robust to annotation variations. These robust features largely belonged to morphology (describing shape characteristics), intensity (capturing intensity profile statistics), and COLLAGE (capturing heterogeneity in gradient orientations) feature families.
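A minimal sketch of the two reproducibility measures described above (inter-rater Dice overlap and Spearman correlation of paired radiomic feature values), using placeholder arrays rather than the Ivy GAP data:

```python
# Sketch of the two reproducibility measures described above: (a) inter-rater
# Dice overlap of paired segmentation labels and (b) Spearman correlation of
# radiomic feature values across the paired annotations. All inputs are
# placeholders (a small random feature panel, not the 11,700-feature set).
import numpy as np
from scipy.stats import spearmanr

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    denom = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

rng = np.random.default_rng(1)
seg_upenn = rng.random((64, 64, 64)) > 0.5     # annotation set 1 (placeholder)
seg_cwru = rng.random((64, 64, 64)) > 0.5      # annotation set 2 (placeholder)
print("Dice:", dice(seg_upenn, seg_cwru))

# Rows = cases, columns = radiomic features; one matrix per annotation set.
feats_upenn = rng.normal(size=(31, 500))
feats_cwru = feats_upenn + rng.normal(scale=0.5, size=(31, 500))

robust = []
for j in range(feats_upenn.shape[1]):
    rho, _ = spearmanr(feats_upenn[:, j], feats_cwru[:, j])
    if rho >= 0.8:                             # highly correlated => reproducible
        robust.append(j)
print(f"{100 * len(robust) / feats_upenn.shape[1]:.1f}% of features reproducible")
```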

Data Format And Usage Notes: We make publicly available on TCIA's Analysis Results Directory (https://doi.org/10.7937/9j41-7d44) the complete set of (a) multi-institutional expert annotations for the tumor sub-compartments, (b) 11,700 radiomic features, and (c) the associated reproducibility meta-analysis.

Potential Applications: The annotations and the associated meta-data for Ivy GAP are released with the purpose of enabling researchers toward developing image-based biomarkers for prognostic/predictive applications in GBM.
http://dx.doi.org/10.1002/mp.14556
December 2020

The future of digital health with federated learning.

NPJ Digit Med 2020 14;3:119. Epub 2020 Sep 14.

King's College London (KCL), London, UK.

Data-driven machine learning (ML) has emerged as a promising approach for building accurate and robust statistical models from medical data, which is collected in huge volumes by modern healthcare systems. Existing medical data is not fully exploited by ML primarily because it sits in data silos and privacy concerns restrict access to this data. However, without access to sufficient data, ML will be prevented from reaching its full potential and, ultimately, from making the transition from research to clinical practice. This paper considers key factors contributing to this issue, explores how federated learning (FL) may provide a solution for the future of digital health and highlights the challenges and considerations that need to be addressed.
http://dx.doi.org/10.1038/s41746-020-00323-1
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7490367
September 2020

The Cancer Imaging Phenomics Toolkit (CaPTk): Technical Overview.

Brainlesion 2020 19;11993:380-394. Epub 2020 May 19.

Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA.

The purpose of this manuscript is to provide an overview of the technical specifications and architecture of the Cancer Imaging Phenomics Toolkit (CaPTk, www.cbica.upenn.edu/captk), a cross-platform, open-source, easy-to-use, and extensible software platform for analyzing 2D and 3D images, currently focusing on radiographic scans of brain, breast, and lung cancer. The primary aim of this platform is to enable swift and efficient translation of cutting-edge academic research into clinically useful tools relating to clinical quantification, analysis, predictive modeling, decision-making, and reporting workflow. CaPTk builds upon established open-source software toolkits, such as the Insight Toolkit (ITK) and OpenCV, to bring together advanced computational functionality. This functionality describes specialized, as well as general-purpose, image analysis algorithms developed during active multi-disciplinary collaborative research studies to address real clinical requirements. The target audience of CaPTk consists of both computational scientists and clinical experts. For the former it provides (i) an efficient image viewer offering the ability to integrate new algorithms, and (ii) a library of readily-available, clinically-relevant algorithms, allowing batch-processing of multiple subjects. For the latter it facilitates the use of complex algorithms for clinically-relevant studies through a user-friendly interface, eliminating the prerequisite of a substantial computational background. CaPTk's long-term goal is to provide widely-used technology to make use of advanced quantitative imaging analytics in cancer prediction, diagnosis, and prognosis, leading toward a better understanding of the biological mechanisms of cancer development.
http://dx.doi.org/10.1007/978-3-030-46643-5_38
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7402244
May 2020

Towards Population-Based Histologic Stain Normalization of Glioblastoma.

Brainlesion 2020 19;11992:44-56. Epub 2020 May 19.

Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA.

Glioblastoma (GBM) is the most aggressive type of primary malignant adult brain tumor, with very heterogeneous radiographic, histologic, and molecular profiles. A growing body of advanced computational analyses are conducted towards further understanding the biology and variation in glioblastoma. To address the intrinsic heterogeneity among different computational studies, reference standards have been established to facilitate both radiographic and molecular analyses, e.g., an anatomical atlas for image registration and housekeeping genes, respectively. However, there is an apparent lack of reference standards in the domain of digital pathology, where each independent study uses an arbitrarily chosen slide from its evaluation dataset for normalization purposes. In this study, we introduce a novel stain normalization approach based on a composite reference slide comprised of information from a large population of anatomically annotated hematoxylin and eosin (H&E) whole-slide images from the Ivy Glioblastoma Atlas Project (IvyGAP). Two board-certified neuropathologists manually reviewed and selected annotations in 509 slides, according to the World Health Organization definitions. We computed summary statistics from each of these approved annotations and weighted them based on their percent contribution to overall slide (PCOS), to form a global histogram and stain vectors. Quantitative evaluation of pre- and post-normalization stain density statistics for each annotated region with PCOS > 0.05% yielded a significant (largest p = 0.001, two-sided Wilcoxon rank sum test) reduction of its intensity variation for both hematoxylin and eosin. Subject to further large-scale evaluation, our findings support the proposed approach as a potentially robust population-based reference for stain normalization.
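A minimal sketch of the population-weighting idea (per-annotation stain statistics combined into global reference statistics using PCOS weights); the numbers below are illustrative placeholders, not the published reference values:

```python
# Sketch of the population-weighting idea described above: per-annotation stain
# density statistics are combined into global reference statistics, weighted by
# each annotation's percent contribution to its overall slide (PCOS).
import numpy as np

# One record per approved annotation: (mean_H, std_H, mean_E, std_E, pcos)
annotations = np.array([
    # mean_H  std_H  mean_E  std_E  pcos (%)
    [0.55,   0.10,  0.30,   0.08,  12.0],
    [0.60,   0.12,  0.28,   0.07,   3.5],
    [0.48,   0.09,  0.35,   0.10,  40.0],
])

weights = annotations[:, 4] / annotations[:, 4].sum()

global_mean_h = np.average(annotations[:, 0], weights=weights)
global_mean_e = np.average(annotations[:, 2], weights=weights)
# Pooled (weighted) standard deviations via the law of total variance.
global_std_h = np.sqrt(np.average(annotations[:, 1] ** 2
                                  + (annotations[:, 0] - global_mean_h) ** 2,
                                  weights=weights))
global_std_e = np.sqrt(np.average(annotations[:, 3] ** 2
                                  + (annotations[:, 2] - global_mean_e) ** 2,
                                  weights=weights))
print("Reference H (mean, std):", global_mean_h, global_std_h)
print("Reference E (mean, std):", global_mean_e, global_std_e)
```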
http://dx.doi.org/10.1007/978-3-030-46640-4_5
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7394499
May 2020

Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data.

Sci Rep 2020 07 28;10(1):12598. Epub 2020 Jul 28.

Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Richards Medical Research Laboratories, Floor 7, 3700 Hamilton Walk, Philadelphia, PA, 19104, USA.

Several studies underscore the potential of deep learning in identifying complex patterns, leading to diagnostic and prognostic biomarkers. Identifying sufficiently large and diverse datasets, required for training, is a significant challenge in medicine and can rarely be found in individual institutions. Multi-institutional collaborations based on centrally-shared patient data face privacy and ownership challenges. Federated learning is a novel paradigm for data-private multi-institutional collaborations, where model-learning leverages all available data without sharing data between institutions, by distributing the model-training to the data-owners and aggregating their results. We show that federated learning among 10 institutions results in models reaching 99% of the model quality achieved with centralized data, and evaluate generalizability on data from institutions outside the federation. We further investigate the effects of data distribution across collaborating institutions on model quality and learning patterns, indicating that increased access to data through data-private multi-institutional collaborations can benefit model quality more than the errors introduced by the collaborative method. Finally, we compare with other collaborative-learning approaches, demonstrating the superiority of federated learning, and discuss practical implementation considerations. Clinical adoption of federated learning is expected to lead to models trained on datasets of unprecedented size, and hence to have a catalytic impact towards precision/personalized medicine.
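A minimal sketch of a FedAvg-style aggregation round of the kind described (local training at each institution followed by weighted averaging of model weights); the model, data loaders, and hyperparameters are placeholders, and this is not the authors' implementation:

```python
# Sketch of a FedAvg-style round: local training at each institution, then
# weighted averaging of the model weights only; no patient data is exchanged.
import copy
import torch
import torch.nn as nn

def federated_round(global_model: nn.Module, institution_loaders, epochs=1, lr=1e-3):
    """One federated round: local training everywhere, then weight averaging."""
    local_states, sizes = [], []
    for loader in institution_loaders:
        local = copy.deepcopy(global_model)      # each site starts from the global model
        optimizer = torch.optim.SGD(local.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        local.train()
        for _ in range(epochs):
            for x, y in loader:                  # data never leaves the institution
                optimizer.zero_grad()
                loss_fn(local(x), y).backward()
                optimizer.step()
        local_states.append(local.state_dict())
        sizes.append(len(loader.dataset))

    # Aggregate: average parameters, weighted by each institution's sample count.
    total = float(sum(sizes))
    averaged = copy.deepcopy(local_states[0])
    for key in averaged:
        averaged[key] = sum(state[key].float() * (n / total)
                            for state, n in zip(local_states, sizes))
    global_model.load_state_dict(averaged)
    return global_model
```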
http://dx.doi.org/10.1038/s41598-020-69250-1
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7387485
July 2020

iGLASS: imaging integration into the Glioma Longitudinal Analysis Consortium.

Neuro Oncol 2020 10;22(10):1545-1546

Henry Ford Cancer Institute, Hermelin Brain Tumor Center, Henry Ford Health System, Detroit, Michigan, USA.

http://dx.doi.org/10.1093/neuonc/noaa160
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7566469
October 2020

Brain extraction on MRI scans in presence of diffuse glioma: Multi-institutional performance evaluation of deep learning methods and robust modality-agnostic training.

Neuroimage 2020 10 27;220:117081. Epub 2020 Jun 27.

Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA. Electronic address:

Brain extraction, or skull-stripping, is an essential pre-processing step in neuro-imaging that has a direct impact on the quality of all subsequent processing and analysis steps. It is also a key requirement in multi-institutional collaborations to comply with privacy-preserving regulations. Existing automated methods, including Deep Learning (DL) based methods that have obtained state-of-the-art results in recent years, have primarily targeted brain extraction without considering pathologically-affected brains. Accordingly, they perform sub-optimally when applied on magnetic resonance imaging (MRI) brain scans with apparent pathologies such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. In this study, we present a comprehensive performance evaluation of recent deep learning architectures for brain extraction, training models on mpMRI scans of pathologically-affected brains, with a particular focus on seeking a practically-applicable, low computational footprint approach, generalizable across multiple institutions, further facilitating collaborations. We identified a large retrospective multi-institutional dataset of n=3340 mpMRI brain tumor scans, with manually-inspected and approved gold-standard segmentations, acquired during standard clinical practice under varying acquisition protocols, both from private institutional data and public (TCIA) collections. To facilitate optimal utilization of rich mpMRI data, we further introduce and evaluate a novel "modality-agnostic training" technique that can be applied using any available modality, without the need for model retraining. Our results indicate that the modality-agnostic approach obtains accurate results, providing a generic and practical tool for brain extraction on scans with brain tumors.
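One plausible reading of the "modality-agnostic training" idea is a data feed that presents a single randomly chosen modality per training sample, so the resulting single-channel model accepts whichever modality is available at inference; the sketch below illustrates that reading with placeholder inputs and is not the authors' exact implementation.

```python
# Illustrative "modality-agnostic" data feed: each training sample exposes one
# randomly chosen MRI modality as the sole input channel, so a single-channel
# model can later be applied to whichever modality is available.
import numpy as np
import torch
from torch.utils.data import Dataset

class ModalityAgnosticDataset(Dataset):
    def __init__(self, mpmri_volumes, brain_masks):
        # mpmri_volumes: list of arrays shaped (4, D, H, W) -> T1, T1-Gd, T2, FLAIR
        # brain_masks:   list of arrays shaped (D, H, W), 1 = brain, 0 = background
        self.volumes = mpmri_volumes
        self.masks = brain_masks

    def __len__(self):
        return len(self.volumes)

    def __getitem__(self, idx):
        vol = self.volumes[idx]
        modality = np.random.randint(vol.shape[0])            # pick one modality
        image = vol[modality]
        image = (image - image.mean()) / (image.std() + 1e-8)  # per-scan z-score
        return (torch.from_numpy(image[None]).float(),          # 1-channel input
                torch.from_numpy(self.masks[idx]).long())
```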
http://dx.doi.org/10.1016/j.neuroimage.2020.117081
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7597856
October 2020

Skull-Stripping of Glioblastoma MRI Scans Using 3D Deep Learning.

Brainlesion 2019 Oct 19;11992:57-68. Epub 2020 May 19.

Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA.

Skull-stripping is an essential pre-processing step in computational neuro-imaging directly impacting subsequent analyses. Existing skull-stripping methods have primarily targeted non-pathologically affected brains. Accordingly, they may perform suboptimally when applied on brain Magnetic Resonance Imaging (MRI) scans that have clearly discernible pathologies, such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. Here we present a performance evaluation of publicly available implementations of established 3D Deep Learning architectures for semantic segmentation (namely DeepMedic, 3D U-Net, FCN), with a particular focus on identifying a skull-stripping approach that performs well on brain tumor scans, and also has a low computational footprint. We have identified a retrospective dataset of 1,796 mpMRI brain tumor scans, with corresponding manually-inspected and verified gold-standard brain tissue segmentations, acquired during standard clinical practice under varying acquisition protocols at the Hospital of the University of Pennsylvania. Our quantitative evaluation identified DeepMedic as the best performing method (Dice = 97.9, Hausdorff95 = 2.68). We release this pre-trained model through the Cancer Imaging Phenomics Toolkit (CaPTk) platform.
http://dx.doi.org/10.1007/978-3-030-46640-4_6
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7311100
October 2019

Overall survival prediction in glioblastoma patients using structural magnetic resonance imaging (MRI): advanced radiomic features may compensate for lack of advanced MRI modalities.

J Med Imaging (Bellingham) 2020 May 9;7(3):031505. Epub 2020 Jun 9.

University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Richards Medical Research Laboratories, Philadelphia, PA, United States.

Glioblastoma, the most common and aggressive adult brain tumor, is considered noncurative at diagnosis, with 14 to 16 months median survival following treatment. There is increasing evidence that noninvasive integrative analysis of radiomic features can predict overall and progression-free survival, using advanced multiparametric magnetic resonance imaging (Adv-mpMRI). If successfully applicable, such noninvasive markers can considerably influence patient management. However, most patients prior to initiation of therapy typically undergo only basic structural mpMRI (Bas-mpMRI, i.e., T1, T1-Gd, T2, and T2-fluid-attenuated inversion recovery) preoperatively, rather than Adv-mpMRI that provides additional vascularization (dynamic susceptibility contrast-MRI) and cell-density (diffusion tensor imaging) related information. We assess a retrospective cohort of 101 glioblastoma patients with available Adv-mpMRI from a previous study, which has shown that an initial feature panel (IFP, i.e., intensity, volume, location, and growth model parameters) extracted from Adv-mpMRI can yield accurate overall survival stratification. We focus on demonstrating that equally accurate prediction models can be constructed using augmented radiomic feature panels (ARFPs, i.e., integrating morphology and textural descriptors) extracted solely from widely available Bas-mpMRI, obviating the need for using Adv-mpMRI. We extracted 1612 radiomic features from distinct tumor subregions to build multivariate models that stratified patients as long-, intermediate-, or short-survivors. The classification accuracy of the model utilizing Adv-mpMRI protocols and the IFP was 72.77% and degraded to 60.89% when using only Bas-mpMRI. However, utilizing the ARFP on Bas-mpMRI improved the accuracy to 74.26%. Furthermore, Kaplan-Meier analysis demonstrated superior classification of subjects into short-, intermediate-, and long-survivor classes when using ARFP extracted from Bas-mpMRI. This quantitative evaluation indicates that accurate survival prediction in glioblastoma patients is feasible using solely Bas-mpMRI and integrative advanced radiomic features, which can compensate for the lack of Adv-mpMRI. Our finding holds promise for generalization across multiple institutions that may not have access to Adv-mpMRI and to better inform clinical decision-making about aggressive interventions and clinical trials.
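A minimal sketch of this kind of stratification (a multi-class radiomic classifier followed by Kaplan-Meier comparison of the predicted survivor classes), using placeholder data and thresholds rather than the study's cohort:

```python
# Sketch of survival stratification of the kind described above: radiomic
# features feed a multi-class classifier for short/intermediate/long survival,
# and Kaplan-Meier curves of the predicted classes are compared. All data and
# the 9/18-month thresholds are placeholders.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(3)
X = rng.normal(size=(101, 1612))                     # radiomic features (placeholder)
survival_months = rng.gamma(shape=2.0, scale=9.0, size=101)
event_observed = rng.integers(0, 2, size=101)
y = np.digitize(survival_months, bins=[9.0, 18.0])   # 0=short, 1=intermediate, 2=long

model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
pred_class = cross_val_predict(model, X, y, cv=5)

kmf = KaplanMeierFitter()
for label, name in enumerate(["short", "intermediate", "long"]):
    idx = pred_class == label
    if idx.any():
        kmf.fit(survival_months[idx], event_observed[idx], label=name)
        print(name, "median survival:", kmf.median_survival_time_)

result = multivariate_logrank_test(survival_months, pred_class, event_observed)
print("log-rank p-value across predicted classes:", result.p_value)
```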
http://dx.doi.org/10.1117/1.JMI.7.3.031505
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7282509
May 2020

Integrated Biophysical Modeling and Image Analysis: Application to Neuro-Oncology.

Annu Rev Biomed Eng 2020 06;22:309-341

Oden Institute of Computational Engineering and Sciences, The University of Texas at Austin, Austin, Texas 78712, USA; email:

Central nervous system (CNS) tumors come with vastly heterogeneous histologic, molecular, and radiographic landscapes, rendering their precise characterization challenging. The rapidly growing fields of biophysical modeling and radiomics have shown promise in better characterizing the molecular, spatial, and temporal heterogeneity of tumors. Integrative analysis of CNS tumors, including clinically acquired multi-parametric magnetic resonance imaging (mpMRI) and the inverse problem of calibrating biophysical models to mpMRI data, assists in identifying macroscopic quantifiable tumor patterns of invasion and proliferation, potentially leading to improved (i) detection/segmentation of tumor subregions and (ii) computer-aided diagnostic/prognostic/predictive modeling. This article presents a summary of (i) biophysical growth modeling and simulation, (ii) inverse problems for model calibration, (iii) these models' integration with imaging workflows, and (iv) their application to clinically relevant studies. We anticipate that such quantitative integrative analysis may even be beneficial in a future revision of the World Health Organization (WHO) classification for CNS tumors, ultimately improving patient survival prospects.
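The growth models referenced here are commonly formulated as reaction-diffusion equations; a representative (not necessarily the exact reviewed) formulation is:

```latex
% A representative reaction-diffusion tumor growth model of the kind referenced
% above (illustrative formulation; the reviewed models may differ in detail).
\[
\frac{\partial c(\mathbf{x},t)}{\partial t}
  = \nabla \cdot \bigl( D(\mathbf{x}) \, \nabla c(\mathbf{x},t) \bigr)
  + \rho \, c(\mathbf{x},t) \bigl( 1 - c(\mathbf{x},t) \bigr),
\qquad
D \nabla c \cdot \mathbf{n} = 0 \ \text{on} \ \partial \Omega,
\]
% where $c$ is the normalized tumor cell density, $D(\mathbf{x})$ the spatially
% varying diffusion (infiltration) coefficient, and $\rho$ the proliferation
% rate; the inverse problem calibrates $(D, \rho)$ to the observed mpMRI data.
```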
http://dx.doi.org/10.1146/annurev-bioeng-062117-121105
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7520881
June 2020

ANHIR: Automatic Non-Rigid Histological Image Registration Challenge.

IEEE Trans Med Imaging 2020 10 7;39(10):3042-3052. Epub 2020 Apr 7.

The Automatic Non-rigid Histological Image Registration (ANHIR) challenge was organized to compare the performance of image registration algorithms on several kinds of microscopy histology images in a fair and independent manner. We assembled 8 datasets, containing 355 images with 18 different stains, resulting in 481 image pairs to be registered. Registration accuracy was evaluated using manually placed landmarks. In total, 256 teams registered for the challenge, 10 submitted results, and 6 participated in the workshop. Here, we present the results of 7 well-performing methods from the challenge together with 6 well-known existing methods. The best methods used coarse but robust initial alignment followed by non-rigid registration, worked at multiple resolutions, and were carefully tuned for the data at hand. They outperformed off-the-shelf methods, mostly by being more robust. The best methods could successfully register over 98% of all landmarks, and their mean landmark registration error (target registration error, TRE) was 0.44% of the image diagonal. The challenge remains open to submissions, and all images are available for download.
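A minimal sketch of the landmark-based accuracy measure reported above (mean target registration error expressed as a fraction of the image diagonal), with placeholder landmark coordinates:

```python
# Sketch of the landmark-based accuracy measure used above: target registration
# error (TRE) of warped landmarks, reported relative to the image diagonal.
import numpy as np

def relative_tre(warped_landmarks, target_landmarks, image_shape):
    """Mean Euclidean landmark error as a fraction of the image diagonal."""
    errors = np.linalg.norm(warped_landmarks - target_landmarks, axis=1)
    diagonal = np.sqrt(sum(s ** 2 for s in image_shape))
    return errors.mean() / diagonal

# Placeholder landmark pairs (warped vs. manually placed target positions).
warped = np.array([[102.0, 204.0], [355.5, 410.2]])
target = np.array([[100.0, 200.0], [350.0, 412.0]])
print(f"rTRE: {100 * relative_tre(warped, target, (2048, 2048)):.3f}% of diagonal")
```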
http://dx.doi.org/10.1109/TMI.2020.2986331
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7584382
October 2020

Cancer Imaging Phenomics via CaPTk: Multi-Institutional Prediction of Progression-Free Survival and Pattern of Recurrence in Glioblastoma.

JCO Clin Cancer Inform 2020 03;4:234-244

Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA.

Purpose: To construct a multi-institutional radiomic model that supports upfront prediction of progression-free survival (PFS) and recurrence pattern (RP) in patients diagnosed with glioblastoma multiforme (GBM) at the time of initial diagnosis.

Patients And Methods: We retrospectively identified data for patients with newly diagnosed GBM from two institutions (institution 1, n = 65; institution 2, n = 15) who underwent gross total resection followed by standard adjuvant chemoradiation therapy, with pathologically confirmed recurrence, sufficient follow-up magnetic resonance imaging (MRI) scans to reliably determine PFS, and available presurgical multiparametric MRI (MP-MRI). The advanced software suite Cancer Imaging Phenomics Toolkit (CaPTk) was leveraged to analyze standard clinical brain MP-MRI scans. A rich set of imaging features was extracted from the MP-MRI scans acquired before the initial resection and was integrated into two distinct imaging signatures for predicting mean shorter or longer PFS and near or distant RP. The predictive signatures for PFS and RP were evaluated on the basis of different classification schemes: single-institutional analysis, multi-institutional analysis with random partitioning of the data into discovery and replication cohorts, and multi-institutional assessment with data from institution 1 as the discovery cohort and data from institution 2 as the replication cohort.

Results: These predictors achieved cross-validated classification performance (ie, area under the receiver operating characteristic curve) of 0.88 (single-institution analysis) and 0.82 to 0.83 (multi-institution analysis) for prediction of PFS and 0.88 (single-institution analysis) and 0.56 to 0.71 (multi-institution analysis) for prediction of RP.

Conclusion: Imaging signatures of presurgical MP-MRI scans reveal relatively high predictability of time and location of GBM recurrence, subject to the patients receiving standard first-line chemoradiation therapy. Through its graphical user interface, CaPTk offers easy accessibility to advanced computational algorithms for deriving imaging signatures predictive of clinical outcome and could similarly be used for a variety of radiomic and radiogenomic analyses.
http://dx.doi.org/10.1200/CCI.19.00121
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7113126
March 2020

The Image Biomarker Standardization Initiative: Standardized Quantitative Radiomics for High-Throughput Image-based Phenotyping.

Radiology 2020 05 10;295(2):328-338. Epub 2020 Mar 10.

From OncoRay-National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden-Rossendorf, Fetscherstr 74, PF 41, 01307 Dresden, Germany (A.Z., S. Leger, E.G.C.T., C.R., S. Löck); National Center for Tumor Diseases (NCT), Partner Site Dresden, Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany (A.Z.); Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden and Helmholtz Association/Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Germany (A.Z., S. Leger, E.G.C.T.); German Cancer Consortium (DKTK), Partner Site Dresden, and German Cancer Research Center (DKFZ), Heidelberg, Germany (A.Z., S. Leger, E.G.C.T., C.R., S. Löck); Medical Physics Unit, McGill University, Montréal, Canada (M.V., I.E.N.); Image Response Assessment Team Core Facility, Moffitt Cancer Center, Tampa, Fla (M.A.A.); Dana-Farber Cancer Institute, Brigham and Women's Hospital, and Harvard Medical School, Harvard University, Boston, Mass (H.J.W.L.A.); Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland (V.A., A.D., H.M.); Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY (A.A.); Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Md (S.A.); Department of Radiology and Radiological Science, Johns Hopkins University, Baltimore, Md (S.A., A.R.); Center for Biomedical image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, Pa (S.B., C.D., S.M.H., S.P.); Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pa (S.B., C.D., S.M.H., S.P.); Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pa (S.B.); Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen (UMCG), Groningen, the Netherlands (R.J.B., R.B., E.A.G.P.); Radiology and Nuclear Medicine, VU University Medical Centre (VUMC), Amsterdam, the Netherlands (R.B.); Department of Radiation Oncology, University Hospital Zurich, University of Zurich, Zurich, Switzerland (M.B., M.Guckenberger, S.T.L.); Fondazione Policlinico Universitario "A. 
Gemelli" IRCCS, Rome, Italy (L.B., N.D., R.G., J.L., V.V.); Laboratoire d'Imagerie Translationnelle en Oncologie, Université Paris Saclay, Inserm, Institut Curie, Orsay, France (I.B., C.N., F.O.); Cancer Imaging Dept, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom (G.J.R.C., V.G., M.M.S.); Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital, Lausanne, Switzerland (A.D.); Laboratory of Medical Information Processing (LaTIM)-team ACTION (image-guided therapeutic action in oncology), INSERM, UMR 1101, IBSAM, UBO, UBL, Brest, France (M.C.D., M.H., T.U.); Department of Radiation Oncology, the Netherlands Cancer Institute (NKI), Amsterdam, the Netherlands (C.V.D.); Department of Radiology, Stanford University School of Medicine, Stanford, Calif (S.E., S.N.); Department of Radiation Oncology, Physics Division, University of Michigan, Ann Arbor, Mich (I.E.N., A.U.K.R.); Surgical Planning Laboratory, Brigham and Women's Hospital and Harvard Medical School, Harvard University, Boston, Mass (A.Y.F.); Department of Cancer Imaging and Metabolism, Moffitt Cancer Center, Tampa, Fla (R.J.G.); Department of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany (M. Götz, F.I., K.H.M.H., J.S.); The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands (P.L., R.T.H.L.); Section for Biomedical Physics, Department of Radiation Oncology, University of Tübingen, Germany (F.L., J.S.F., D.T.); Department of Clinical Medicine, University of Bergen, Bergen, Norway (A.L.); Department of Radiation Oncology, University of California, San Francisco, Calif (O.M.); University of Geneva, Geneva, Switzerland (H.M.); Department of Electrical Engineering, Stanford University, Stanford, Calif (S.N.); Department of Medicine (Biomedical Informatics Research), Stanford University School of Medicine, Stanford, Calif (S.N.); Departments of Radiology and Physics, University of British Columbia, Vancouver, Canada (A.R.); Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, Mich (A.U.K.R.); Department of Radiation Oncology, University of Groningen, University Medical Center Groningen (UMCG), Groningen, the Netherlands (N.M.S., R.J.H.M.S., L.V.v.D.); School of Engineering, Cardiff University, Cardiff, United Kingdom (E.S., P.W.); Department of Medical Physics, Velindre Cancer Centre, Cardiff, United Kingdom (E.S.); Department of Radiotherapy and Radiation Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany (E.G.C.T., C.R., S. Löck), Helmholtz-Zentrum Dresden-Rossendorf, Institute of Radiooncology-OncoRay, Dresden, Germany (E.G.C.T., C.R.); Department of Nuclear Medicine, CHU Milétrie, Poitiers, France (T.U.); Department of Radiology, the Netherlands Cancer Institute (NKI), Amsterdam, the Netherlands (J.v.G.); GROW-School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht, the Netherlands (J.v.G.); Department of Radiation Oncology, Dana-Farber Cancer Institute, Brigham and Women's Hospital, Harvard Medical School, Boston, Mass (J.v.G.); and Department of Radiology, Leiden University Medical Center (LUMC), Leiden, the Netherlands (F.H.P.v.V.).

Background Radiomic features may quantify characteristics present in medical imaging. However, the lack of standardized definitions and validated reference values has hampered clinical use. Purpose To standardize a set of 174 radiomic features. Materials and Methods Radiomic features were assessed in three phases. In phase I, 487 features were derived from the basic set of 174 features. Twenty-five research teams with unique radiomics software implementations computed feature values directly from a digital phantom, without any additional image processing. In phase II, 15 teams computed values for 1347 derived features using a CT image of a patient with lung cancer and predefined image processing configurations. In both phases, consensus among the teams on the validity of tentative reference values was measured through the frequency of the modal value and classified as follows: less than three matches, weak; three to five matches, moderate; six to nine matches, strong; 10 or more matches, very strong. In the final phase (phase III), a public data set of multimodality images (CT, fluorine 18 fluorodeoxyglucose PET, and T1-weighted MRI) from 51 patients with soft-tissue sarcoma was used to prospectively assess reproducibility of standardized features. Results Consensus on reference values was initially weak for 232 of 302 features (76.8%) at phase I and 703 of 1075 features (65.4%) at phase II. At the final iteration, weak consensus remained for only two of 487 features (0.4%) at phase I and 19 of 1347 features (1.4%) at phase II. Strong or better consensus was achieved for 463 of 487 features (95.1%) at phase I and 1220 of 1347 features (90.6%) at phase II. Overall, 169 of 174 features were standardized in the first two phases. In the final validation phase (phase III), most of the 169 standardized features could be excellently reproduced (166 with CT; 164 with PET; and 164 with MRI). Conclusion A set of 169 radiomics features was standardized, which enabled verification and calibration of different radiomics software. © RSNA, 2020 See also the editorial by Kuhl and Truhn in this issue.
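A minimal sketch of the modal-value consensus grading described above, with placeholder submissions and an arbitrary rounding tolerance:

```python
# Sketch of the consensus rule described above: the values submitted by the
# teams for one feature are compared, and consensus is graded by how many
# teams match the modal value. Inputs and tolerance are placeholders.
from collections import Counter

def consensus_level(values, tolerance=1e-6):
    """Grade consensus by the frequency of the modal (rounded) value."""
    rounded = [round(v / tolerance) * tolerance for v in values]
    matches = Counter(rounded).most_common(1)[0][1]
    if matches < 3:
        return "weak"
    if matches <= 5:
        return "moderate"
    if matches <= 9:
        return "strong"
    return "very strong"

# Example: values of one radiomic feature reported by different teams.
submissions = [12.40, 12.40, 12.40, 12.41, 12.40, 12.40, 7.8, 12.40]
print(consensus_level(submissions))   # 6 teams match the modal value -> "strong"
```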
http://dx.doi.org/10.1148/radiol.2020191145
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7193906
May 2020

Segmentation and Classification in Digital Pathology for Glioma Research: Challenges and Deep Learning Approaches.

Front Neurosci 2020 21;14:27. Epub 2020 Feb 21.

Cancer Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD, United States.

Biomedical imaging is an important source of information in cancer research. Characterizations of cancer morphology at onset, progression, and in response to treatment provide complementary information to that gleaned from genomics and clinical data. Accurate extraction and classification of both visual and latent image features is an increasingly complex challenge due to the increased complexity and resolution of biomedical image data. In this paper, we present four deep learning-based image analysis methods from the Computational Precision Medicine (CPM) satellite event of the 21st International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2018). One method is a segmentation method designed to segment nuclei in whole slide tissue images (WSIs) of adult diffuse glioma cases. It achieved a Dice similarity coefficient of 0.868 with the CPM challenge datasets. Three methods are classification methods developed to categorize adult diffuse glioma cases into oligodendroglioma and astrocytoma classes using radiographic and histologic image data. These methods achieved accuracy values of 0.75, 0.80, and 0.90, measured as the ratio of the number of correct classifications to the number of total cases, with the challenge datasets. The evaluations of the four methods indicate that (1) carefully constructed deep learning algorithms are able to produce high accuracy in the analysis of biomedical image data and (2) the combination of radiographic with histologic image information improves classification performance.
http://dx.doi.org/10.3389/fnins.2020.00027
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7046596
February 2020

Histopathology-validated machine learning radiographic biomarker for noninvasive discrimination between true progression and pseudo-progression in glioblastoma.

Cancer 2020 06 4;126(11):2625-2636. Epub 2020 Mar 4.

Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania.

Background: Imaging of glioblastoma patients after maximal safe resection and chemoradiation commonly demonstrates new enhancements that raise concerns about tumor progression. However, in 30% to 50% of patients, these enhancements primarily represent the effects of treatment, or pseudo-progression (PsP). We hypothesize that quantitative machine learning analysis of clinically acquired multiparametric magnetic resonance imaging (mpMRI) can identify subvisual imaging characteristics to provide robust, noninvasive imaging signatures that can distinguish true progression (TP) from PsP.

Methods: We evaluated independent discovery (n = 40) and replication (n = 23) cohorts of glioblastoma patients who underwent second resection due to progressive radiographic changes suspicious for recurrence. Deep learning and conventional feature extraction methods were used to extract quantitative characteristics from the mpMRI scans. Multivariate analysis of these features revealed radiophenotypic signatures distinguishing among TP, PsP, and mixed response that compared with similar categories blindly defined by board-certified neuropathologists. Additionally, interinstitutional validation was performed on 20 new patients.

Results: Patients who demonstrate TP on neuropathology are significantly different (P < .0001) from those with PsP, showing imaging features reflecting higher angiogenesis, higher cellularity, and lower water concentration. The accuracy of the proposed signature in leave-one-out cross-validation was 87% for predicting PsP (area under the curve [AUC], 0.92) and 84% for predicting TP (AUC, 0.83), whereas in the discovery/replication cohort, the accuracy was 87% for predicting PsP (AUC, 0.84) and 78% for TP (AUC, 0.80). The accuracy in the interinstitutional cohort was 75% (AUC, 0.80).

Conclusion: Quantitative mpMRI analysis via machine learning reveals distinctive noninvasive signatures of TP versus PsP after treatment of glioblastoma. Integration of the proposed method into clinical studies can be performed using the freely available Cancer Imaging Phenomics Toolkit.
http://dx.doi.org/10.1002/cncr.32790
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7893811
June 2020

Systematic Evaluation of Image Tiling Adverse Effects on Deep Learning Semantic Segmentation.

Front Neurosci 2020 7;14:65. Epub 2020 Feb 7.

Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, PA, United States.

Convolutional neural network (CNN) models obtain state-of-the-art performance on image classification, localization, and segmentation tasks. Limitations in computer hardware, most notably memory size in deep learning accelerator cards, prevent relatively large images, such as those from medical and satellite imaging, from being processed as a whole in their original resolution. A fully convolutional topology, such as U-Net, is typically trained on down-sampled images and inferred on images of their original size and resolution, by simply dividing the larger image into smaller (typically overlapping) tiles, making predictions on these tiles, and stitching them back together as the prediction for the whole image. In this study, we show that this tiling technique combined with the translationally-invariant nature of CNNs causes small, but relevant, differences during inference that can be detrimental to the performance of the model. Here we quantify these variations in both medical (i.e., BraTS) and non-medical (i.e., satellite) images and show that training a 2D U-Net model on the whole image substantially improves the overall model performance. Finally, we compare 2D and 3D semantic segmentation models to show that providing CNN models with a wider context of the image in all three dimensions leads to more accurate and consistent predictions. Our results suggest that tiling the input to CNN models, while perhaps necessary to overcome the memory limitations in computer hardware, may lead to undesirable and unpredictable errors in the model's output that can only be adequately mitigated by increasing the input of the model to the largest possible tile size.
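A minimal sketch of the overlapping-tile inference procedure under discussion (tile, predict, and average the stitched predictions); the image and "model" are placeholders:

```python
# Sketch of tiled inference: a large 2D image is split into overlapping tiles,
# a model predicts each tile, and predictions are averaged back together.
# `model` is any callable mapping a tile to per-pixel scores (placeholder).
import numpy as np

def predict_tiled(image, model, tile=256, overlap=32):
    """Stitch overlapping tile predictions by averaging in the overlap regions."""
    h, w = image.shape[:2]
    scores = np.zeros((h, w), dtype=np.float64)
    counts = np.zeros((h, w), dtype=np.float64)
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            y0, x0 = min(y, h - tile), min(x, w - tile)   # keep tiles in-bounds
            patch = image[y0:y0 + tile, x0:x0 + tile]
            scores[y0:y0 + tile, x0:x0 + tile] += model(patch)
            counts[y0:y0 + tile, x0:x0 + tile] += 1.0
    return scores / counts

# Toy usage: a "model" that thresholds intensity, applied to a random image.
image = np.random.rand(1000, 1000)
prediction = predict_tiled(image, model=lambda p: (p > 0.5).astype(float))
print(prediction.shape)
```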
http://dx.doi.org/10.3389/fnins.2020.00065
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7020775
February 2020

Multi-Disease Segmentation of Gliomas and White Matter Hyperintensities in the BraTS Data Using a 3D Convolutional Neural Network.

Front Comput Neurosci 2019 20;13:84. Epub 2019 Dec 20.

Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States.

An important challenge in segmenting real-world biomedical imaging data is the presence of multiple disease processes within individual subjects. Most adults above age 60 exhibit a variable degree of small vessel ischemic disease, as well as chronic infarcts, which will manifest as white matter hyperintensities (WMH) on brain MRIs. Subjects diagnosed with gliomas will also typically exhibit some degree of abnormal T2 signal due to WMH, rather than just due to tumor. We sought to develop a fully automated algorithm to distinguish and quantify these distinct disease processes within individual subjects' brain MRIs. To address this multi-disease problem, we trained a 3D U-Net to distinguish between abnormal signal arising from tumors vs. WMH in the 3D multi-parametric MRI (mpMRI, i.e., native T1-weighted, T1-post-contrast, T2, T2-FLAIR) scans of the International Brain Tumor Segmentation (BraTS) 2018 dataset (training n = 285, validation n = 66). Our trained neuroradiologist manually annotated WMH on the BraTS training subjects, finding that 69% of subjects had WMH. Our 3D U-Net model had a 4-channel 3D input patch (80 × 80 × 80) from mpMRI, four encoding and decoding layers, and an output of either four [background, active tumor (AT), necrotic core (NCR), peritumoral edematous/infiltrated tissue (ED)] or five classes (adding WMH as the fifth class). For both the four- and five-class output models, the median Dice score for whole tumor (WT) extent (i.e., the union of AT, ED, NCR) was 0.92 in both training and validation sets. Notably, the five-class model achieved significantly (p = 0.002) lower/better Hausdorff distances for WT extent in the training subjects. There was a strong positive correlation between manually segmented and predicted volumes for WT (r = 0.96) and WMH (r = 0.89). Larger lesion volumes were positively correlated with higher/better Dice scores for WT (r = 0.33), WMH (r = 0.34), and across all lesions (r = 0.89) on a log(10) transformed scale. While the median Dice score for WMH was 0.42 across training subjects with WMH, the median was 0.62 for those with at least 5 cm³ of WMH. We anticipate the development of computational algorithms that are able to model multiple diseases within a single subject will be a critical step toward translating and integrating artificial intelligence systems into the heterogeneous real-world clinical workflow.
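A compact sketch of a 3D U-Net along the lines described (4-channel 80×80×80 input patches, four encoder/decoder levels, configurable 4- or 5-class output); this is an illustrative re-implementation, not the authors' exact network:

```python
# Compact 3D U-Net sketch: 4-channel mpMRI input patches, four encoder/decoder
# levels, and a configurable number of output classes (4 without WMH, 5 with).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, in_channels=4, n_classes=5, base=16):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]
        self.enc = nn.ModuleList()
        prev = in_channels
        for c in chs:
            self.enc.append(conv_block(prev, c))
            prev = c
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(chs[-1], chs[-1] * 2)
        self.up = nn.ModuleList(
            nn.ConvTranspose3d(c * 2, c, kernel_size=2, stride=2) for c in reversed(chs))
        self.dec = nn.ModuleList(conv_block(c * 2, c) for c in reversed(chs))
        self.head = nn.Conv3d(chs[0], n_classes, kernel_size=1)

    def forward(self, x):
        skips = []
        for enc in self.enc:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))
        return self.head(x)          # per-voxel class logits

# 80x80x80 patches divide evenly by 2 four times, matching the patch size above.
logits = UNet3D()(torch.zeros(1, 4, 80, 80, 80))
print(logits.shape)                  # -> torch.Size([1, 5, 80, 80, 80])
```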
http://dx.doi.org/10.3389/fncom.2019.00084
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6933520
December 2019

Multivariate Analysis of Preoperative Magnetic Resonance Imaging Reveals Transcriptomic Classification of Glioblastoma Patients.

Front Comput Neurosci 2019 12;13:81. Epub 2019 Dec 12.

Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States.

Glioblastoma, the most frequent primary malignant brain neoplasm, is genetically diverse and classified into four transcriptomic subtypes, i.e., classical, mesenchymal, proneural, and neural. Currently, detection of transcriptomic subtype is based on analysis of tissue that does not capture the spatial tumor heterogeneity. In view of accumulating evidence of imaging signatures summarizing molecular features of cancer, this study seeks robust non-invasive radiographic markers of transcriptomic classification of glioblastoma, based solely on routine clinically-acquired imaging sequences. A pre-operative retrospective cohort of 112 pathology-proven glioblastoma patients, having multi-parametric MRI (T1, T1-Gd, T2, T2-FLAIR), collected from the Hospital of the University of Pennsylvania were included. Following tumor segmentation into distinct radiographic sub-regions, diverse imaging features were extracted and support vector machines were employed to multivariately integrate these features and derive an imaging signature of transcriptomic subtype. Extracted features included intensity distributions, volume, morphology, statistics, tumors' anatomical location, and texture descriptors for each tumor sub-region. The derived signature was evaluated against the transcriptomic subtype of surgically-resected tissue specimens, using a 5-fold cross-validation method and a receiver-operating-characteristics analysis. The proposed model was 71% accurate in distinguishing among the four transcriptomic subtypes. The accuracy (sensitivity/specificity) for distinguishing each subtype (classical, mesenchymal, proneural, neural) from the rest was equal to 88.4% (71.4/92.3), 75.9% (83.9/72.8), 82.1% (73.1/84.9), and 75.9% (79.4/74.4), respectively. The findings were also replicated in The Cancer Genome Atlas glioblastoma dataset. The obtained imaging signature for the classical subtype was dominated by associations with features related to edge sharpness, whereas the signature for the mesenchymal subtype had a more pronounced presence of higher T2 and T2-FLAIR signal in edema, and higher volume of enhancing tumor and edema. The proneural and neural subtypes were characterized by lower T1-Gd signal in enhancing tumor and higher T2-FLAIR signal in edema, respectively. Our results indicate that quantitative multivariate analysis of features extracted from clinically-acquired MRI may provide a radiographic biomarker of the transcriptomic profile of glioblastoma. Importantly, our findings can be influential in surgical decision-making, treatment planning, and assessment of inoperable tumors.
http://dx.doi.org/10.3389/fncom.2019.00081
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6923885
December 2019

Multi-stage Association Analysis of Glioblastoma Gene Expressions with Texture and Spatial Patterns.

Brainlesion 2019 26;11383:239-250. Epub 2019 Jan 26.

University Hospital of Zürich, Zürich, Switzerland.

Glioblastoma is the most aggressive malignant primary brain tumor, with a poor prognosis. Glioblastoma's heterogeneous neuroimaging, pathologic, and molecular features provide opportunities for subclassification, prognostication, and the development of targeted therapies. Magnetic resonance imaging has the capability of quantifying specific phenotypic imaging features of these tumors. Additional insight into disease mechanism can be gained by exploring genetic foundations. Here, we use gene expression data to evaluate associations with various quantitative imaging phenomic features extracted from magnetic resonance imaging. We highlight a novel correlation by carrying out multi-stage genome-wide association tests at the gene level through a non-parametric correlation framework that allows testing multiple hypotheses about the integrated imaging phenotype-genotype relationship more efficiently and at lower computational expense. Our results highlight several genes previously associated with glioblastoma and other types of cancer, such as LRRC46 (chromosome 17), EPGN (chromosome 4), and TUBA1C (chromosome 12), all associated with our radiographic tumor features.
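A minimal sketch of a gene-level non-parametric association step of this kind (Spearman correlation of each gene's expression with a radiographic feature, followed by multiple-testing correction), using placeholder data:

```python
# Sketch of a gene-level, non-parametric association step: Spearman correlation
# between each gene's expression and a radiographic tumor feature, with
# multiple-testing correction. Data and dimensions are placeholders.
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(7)
n_subjects, n_genes = 60, 500
expression = rng.normal(size=(n_subjects, n_genes))     # gene expression (placeholder)
imaging_feature = rng.normal(size=n_subjects)           # one radiomic phenotype

pvals = np.array([spearmanr(expression[:, g], imaging_feature)[1]
                  for g in range(n_genes)])
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("genes passing FDR 5%:", np.flatnonzero(reject))
```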
http://dx.doi.org/10.1007/978-3-030-11723-8_24
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6719702
January 2019

Imaging signatures of glioblastoma molecular characteristics: A radiogenomics review.

J Magn Reson Imaging 2020 07 27;52(1):54-69. Epub 2019 Aug 27.

Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, Pennsylvania, USA.

Over the past few decades, the advent and development of genomic assessment methods and computational approaches have raised the hopes for identifying therapeutic targets that may aid in the treatment of glioblastoma. However, the targeted therapies have barely been successful in their effort to cure glioblastoma patients, leaving them with a grim prognosis. Glioblastoma exhibits high heterogeneity, both spatially and temporally. The existence of different genetic subpopulations in glioblastoma allows this tumor to adapt itself to environmental forces. Therefore, patients with glioblastoma respond poorly to the prescribed therapies, as treatments are directed towards the whole tumor and not to the specific genetic subregions. Genomic alterations within the tumor develop distinct radiographic phenotypes. In this regard, MRI plays a key role in characterizing molecular signatures of glioblastoma, based on regional variations and phenotypic presentation of the tumor. Radiogenomics has emerged as a (relatively) new field of research to explore the connections between genetic alterations and imaging features. Radiogenomics offers numerous advantages, including noninvasive and global assessment of the tumor and its response to therapies. In this review, we summarize the potential role of radiogenomic techniques to stratify patients according to their specific tumor characteristics with the goal of designing patient-specific therapies. Level of Evidence: 5 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2020;52:54-69.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1002/jmri.26907DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7457548PMC
July 2020

Patient-Specific Registration of Pre-operative and Post-recurrence Brain Tumor MRI Scans.

Brainlesion 2019 26;11383:105-114. Epub 2019 Jan 26.

Department of Computer Science, UNC Chapel Hill, Chapel Hill, NC, USA.

Registering brain magnetic resonance imaging (MRI) scans containing pathologies is challenging, primarily because of the large deformations caused by the pathologies, which lead to missing correspondences between scans. The registration task is nevertheless important and directly related to personalized medicine, as registering pre-operative and post-recurrence scans may allow the evaluation of tumor infiltration and recurrence. While many registration methods exist, most do not specifically account for pathologies. Here, we propose a framework for the registration of longitudinal image-pairs of individual patients diagnosed with glioblastoma. Specifically, we present a combined image registration/reconstruction approach, which makes use of a patient-specific principal component analysis (PCA) model of image appearance to register baseline pre-operative and post-recurrence brain tumor scans. Our approach uses the post-recurrence scan to construct a patient-specific model, which then guides the registration of the pre-operative scan. Quantitative and qualitative evaluations of our framework on 10 patient image-pairs indicate that it provides excellent registration performance without requiring (1) any human intervention or (2) prior knowledge of tumor location, growth or appearance.
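The sketch below illustrates only the appearance-model component of such an approach: a patient-specific PCA basis built from 2-D slices of one scan, used to reconstruct a slice of another. The deformable registration that the paper couples to this model is not reproduced, and all arrays are synthetic placeholders.

```python
# Minimal sketch of a patient-specific PCA appearance model: a low-rank basis
# is built from 2-D slices of one scan and used to reconstruct another slice.
# The full method couples such a model with deformable registration, which is
# not reproduced here. Arrays are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
slices = rng.normal(size=(60, 128 * 128))   # flattened slices of the "model" scan

mean = slices.mean(axis=0)
U, S, Vt = np.linalg.svd(slices - mean, full_matrices=False)
k = 10                                      # number of principal modes retained
basis = Vt[:k]                              # PCA appearance basis

def reconstruct(image_flat):
    """Project a flattened slice onto the PCA basis and reconstruct it."""
    coeffs = basis @ (image_flat - mean)
    return mean + coeffs @ basis

target = rng.normal(size=128 * 128)         # slice from the scan to be registered
approx = reconstruct(target)
print("relative reconstruction error:",
      np.linalg.norm(target - approx) / np.linalg.norm(target))
```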
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1007/978-3-030-11723-8_10DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6599177PMC
January 2019

Multi-Institutional Deep Learning Modeling Without Sharing Patient Data: A Feasibility Study on Brain Tumor Segmentation.

Brainlesion 2019 26;11383:92-104. Epub 2019 Jan 26.

Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA.

Deep learning models for semantic segmentation of images require large amounts of data. In the medical imaging domain, acquiring sufficient data is a significant challenge, since labeling medical image data requires expert knowledge. Collaboration between institutions could address this challenge, but pooling medical data at a centralized location faces various legal, privacy, technical, and data-ownership challenges, especially among international institutions. In this study, we introduce the first use of federated learning for multi-institutional collaboration, enabling deep learning modeling without sharing patient data. Our quantitative results demonstrate that the performance of federated semantic segmentation models (Dice=0.852) on multimodal brain scans is similar to that of models trained by sharing data (Dice=0.862). We compare federated learning with two alternative collaborative learning methods and find that they fail to match the performance of federated learning.
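A minimal sketch of the federated-averaging pattern underlying such collaborations is shown below, using a toy logistic-regression model on synthetic data: each site updates the model on its private data and only the weights are aggregated. It illustrates the collaboration scheme, not the segmentation networks or aggregation details of this study.

```python
# Minimal sketch of federated averaging (FedAvg): each institution trains on
# its own private data and only model weights are shared and averaged.
# This toy example uses logistic regression on synthetic data; it illustrates
# the collaboration pattern, not the segmentation models used in the study.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """Gradient-descent update on one institution's private data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # logistic prediction
        w -= lr * X.T @ (p - y) / len(y)      # gradient step
    return w

# Three institutions, each with a private (synthetic) dataset.
institutions = [(rng.normal(size=(100, 5)), rng.integers(0, 2, size=100))
                for _ in range(3)]

global_w = np.zeros(5)
for _ in range(10):                           # federated rounds
    local_weights = [local_update(global_w, X, y) for X, y in institutions]
    global_w = np.mean(local_weights, axis=0) # only weights leave each site

print("aggregated model weights:", np.round(global_w, 3))
```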
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1007/978-3-030-11723-8_9DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6589345PMC
January 2019

Computational staining of unlabelled tissue.

Nat Biomed Eng 2019 06;3(6):425-426

Center for Biomedical Image Computing and Analytics, Richards Medical Research Laboratories, University of Pennsylvania, Philadelphia, PA, USA.

View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1038/s41551-019-0414-3DOI Listing
June 2019

Precision diagnostics based on machine learning-derived imaging signatures.

Magn Reson Imaging 2019 12 6;64:49-61. Epub 2019 May 6.

Center for Biomedical Image Computing and Analytics, University of Pennsylvania, United States of America.

The complexity of modern multi-parametric MRI has increasingly challenged conventional interpretation of such images. Machine learning has emerged as a powerful approach for integrating diverse and complex imaging data into signatures of diagnostic and predictive value. It has also allowed us to progress from group comparisons to imaging biomarkers that offer value on an individual basis. We review several directions of research around this topic, emphasizing the use of machine learning in personalized predictions of clinical outcome, in breaking down broad umbrella diagnostic categories into more detailed and precise subtypes, and in non-invasively estimating cancer molecular characteristics. These methods and studies contribute to the field of precision medicine by introducing more specific diagnostic and predictive biomarkers of clinical outcome, thereby pointing toward better matching of treatments to patients.
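As a small illustration of one direction mentioned above (splitting a broad diagnostic category into imaging-derived subtypes), the sketch below clusters synthetic multi-parametric imaging features into hypothetical subgroups. The features, scaling, and number of clusters are assumptions for illustration only.

```python
# Minimal sketch of imaging-derived subtyping: clustering multi-parametric
# imaging features into subgroups within one broad diagnostic category.
# Features and the number of subtypes are hypothetical placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 50))   # imaging features, one row per patient

scaled = StandardScaler().fit_transform(features)
subtypes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
print("patients per subtype:", np.bincount(subtypes))
```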
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1016/j.mri.2019.04.012DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6832825PMC
December 2019

Evaluation of Indirect Methods for Motion Compensation in 2-D Focal Liver Lesion Contrast-Enhanced Ultrasound (CEUS) Imaging.

Ultrasound Med Biol 2019 06 2;45(6):1380-1396. Epub 2019 Apr 2.

Radiology & Imaging Research Centre, Evgenidion Hospital, National and Kapodistrian University, Ilisia, Athens, Greece.

This study investigates the application and evaluation of existing indirect methods, namely point-based registration techniques, for the estimation and compensation of motion observed within the 2-D image plane of contrast-enhanced ultrasound (CEUS) cine-loops recorded for the characterization and diagnosis of focal liver lesions (FLLs). The value of applying motion compensation in the challenging modality of CEUS lies in assisting the quantification of the perfusion dynamics of an FLL in relation to its surrounding parenchyma, allowing for a potentially accurate diagnostic suggestion. Towards this end, this study also proposes a novel quantitative multi-level framework for evaluating the quantification of FLLs, which, to the best of our knowledge, remains undefined despite many relevant studies. Following quantitative evaluation of 19 indirect algorithms and configurations, while also considering the requirement for computational efficiency, our results suggest that the "compact and real-time descriptor" (CARD) is the optimal indirect motion compensation method in CEUS.
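For intuition, the sketch below shows a generic point-based motion compensation step between two frames of a 2-D cine-loop: corners are tracked from a reference frame into a moving frame and a similarity transform is estimated with RANSAC. This generic OpenCV pipeline is a stand-in for the descriptor-based methods (such as CARD) evaluated in the study, and the frames here are synthetic, with frame 2 a translated copy of frame 1.

```python
# Minimal sketch of point-based motion compensation between two frames of a
# 2-D cine-loop: track feature points and estimate a rigid/similarity
# transform with RANSAC. This generic corner-tracking approach stands in for
# the descriptor-based methods (e.g. CARD) evaluated in the study.
import cv2
import numpy as np

rng = np.random.default_rng(0)
frame1 = cv2.GaussianBlur((rng.random((256, 256)) * 255).astype(np.uint8), (9, 9), 0)
frame2 = np.roll(frame1, shift=(5, 3), axis=(0, 1))   # simulated in-plane motion

# Detect points in the reference frame and track them into the moving frame.
pts1 = cv2.goodFeaturesToTrack(frame1, maxCorners=200, qualityLevel=0.01, minDistance=7)
pts2, status, _ = cv2.calcOpticalFlowPyrLK(frame1, frame2, pts1, None)
good1, good2 = pts1[status.ravel() == 1], pts2[status.ravel() == 1]

# Estimate the frame2 -> frame1 transform and warp frame2 back into alignment.
M, _ = cv2.estimateAffinePartial2D(good2, good1, method=cv2.RANSAC)
aligned = cv2.warpAffine(frame2, M, frame1.shape[::-1])
print("estimated translation (dx, dy):", np.round(M[:, 2], 2))
```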
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1016/j.ultrasmedbio.2019.01.023DOI Listing
June 2019