Publications by authors named "Jayashree Kalpathy-Cramer"

152 Publications

Single-Examination Risk Prediction of Severe Retinopathy of Prematurity.

Pediatrics 2021 Nov 23. Epub 2021 Nov 23.

Departments of Ophthalmology.

Background And Objectives: Retinopathy of prematurity (ROP) is a leading cause of childhood blindness. Screening and treatment reduce this risk but require multiple examinations of infants, most of whom will not develop severe disease. Previous work has suggested that artificial intelligence may be able to detect incident severe disease (treatment-requiring retinopathy of prematurity [TR-ROP]) before clinical diagnosis. We aimed to build a risk model that combined artificial intelligence with clinical demographics to reduce the number of examinations without missing cases of TR-ROP.

Methods: Infants undergoing routine ROP screening examinations (1579 total eyes, 190 with TR-ROP) were recruited from 8 North American study centers. A vascular severity score (VSS) was derived from retinal fundus images obtained at 32 to 33 weeks' postmenstrual age. Seven ElasticNet logistic regression models were trained on all combinations of birth weight, gestational age, and VSS. The area under the precision-recall curve was used to identify the highest-performing model.
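The selection procedure described here (seven elastic-net logistic regressions, one per non-empty combination of birth weight, gestational age, and VSS, ranked by area under the precision-recall curve) can be sketched as follows. This is a minimal illustration with synthetic data and guessed hyperparameters, not the study's code or cohort.

```python
# Sketch: train an elastic-net logistic regression on each combination of
# candidate predictors and keep the one with the highest average precision
# (area under the precision-recall curve). All data here are synthetic.
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
features = {
    "birth_weight": rng.normal(1000, 300, n),
    "gestational_age": rng.normal(28, 2, n),
    "vss": rng.normal(3, 1.5, n),
}
# Hypothetical outcome loosely driven by gestational age and VSS.
logit = -4 + 0.9 * (features["vss"] - 3) - 0.5 * (features["gestational_age"] - 28)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

results = {}
for r in (1, 2, 3):
    for combo in combinations(features, r):  # 7 non-empty combinations
        # Standardize each predictor so the saga solver converges quickly.
        X = np.column_stack(
            [(features[f] - features[f].mean()) / features[f].std() for f in combo]
        )
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, stratify=y, random_state=0
        )
        model = LogisticRegression(
            penalty="elasticnet", solver="saga", l1_ratio=0.5, max_iter=5000
        )
        model.fit(X_tr, y_tr)
        results[combo] = average_precision_score(y_te, model.predict_proba(X_te)[:, 1])

best = max(results, key=results.get)
print(best, round(results[best], 3))
```

With a rare outcome like TR-ROP, average precision is a more informative ranking metric than ROC AUC because it is sensitive to the positive-class base rate.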

Results: The gestational age + VSS model had the highest performance (mean ± SD area under the precision-recall curve: 0.35 ± 0.11). On 2 different test data sets (n = 444 and n = 132), sensitivity was 100% (positive predictive value: 28.1% and 22.6%) and specificity was 48.9% and 80.8% (negative predictive value: 100.0%).

Conclusions: Using a single examination, this model identified all infants who developed TR-ROP, on average, >1 month before diagnosis with moderate to high specificity. This approach could lead to earlier identification of incident severe ROP, reducing late diagnosis and treatment while simultaneously reducing the number of ROP examinations and unnecessary physiologic stress for low-risk infants.
http://dx.doi.org/10.1542/peds.2021-051772
November 2021

Basic Artificial Intelligence Techniques: Evaluation of Artificial Intelligence Performance.

Radiol Clin North Am 2021 Nov;59(6):941-954

Radiology, Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Boston, MA 02129, USA.

http://dx.doi.org/10.1016/j.rcl.2021.06.005
November 2021

RSNA-MICCAI Panel Discussion: Machine Learning for Radiology from Challenges to Clinical Applications.

Radiol Artif Intell 2021 Sep 28;3(5):e210118. Epub 2021 Jul 28.

Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California San Francisco, 505 Parnassus Ave, Room M-391, San Francisco, CA 94143 (J.M.); Department of Radiology and MGH and BWH Center for Clinical Data Science, Massachusetts General Hospital, Boston, Mass (J.K.C.); Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, Pa (A.F.); Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC (M.G.L.); and Departments of Pediatrics and Radiology, George Washington University School of Medicine, Washington, DC (M.G.L.).

On October 5, 2020, the Medical Image Computing and Computer Assisted Intervention Society (MICCAI) 2020 conference hosted a virtual panel discussion with members of the Machine Learning Steering Subcommittee of the Radiological Society of North America. The MICCAI Society brings together scientists, engineers, physicians, educators, and students from around the world. Both societies share a vision to develop radiologic and medical imaging techniques through advanced quantitative imaging biomarkers and artificial intelligence. The panel elaborated on how collaborations between radiologists and machine learning scientists facilitate the creation and clinical success of imaging technology for radiology. This report presents structured highlights of the moderated dialogue at the panel. Keywords: Back-Propagation, Artificial Neural Network Algorithms, Machine Learning Algorithms. © RSNA, 2021.
http://dx.doi.org/10.1148/ryai.2021210118
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8489458
September 2021

Radiology Implementation Considerations for Artificial Intelligence (AI) Applied to COVID-19, From the Special Series on AI Applications.

AJR Am J Roentgenol 2021 Oct 6. Epub 2021 Oct 6.

Department of Radiology, Mayo Clinic Florida, 4500 San Pablo Rd S, Jacksonville, FL 32224.

Hundreds of imaging-based artificial intelligence (AI) models have been developed in response to the COVID-19 pandemic. AI systems that incorporate imaging have shown promise in primary detection, severity grading, and prognostication of outcomes in COVID-19, and have enabled integration of imaging with a broad range of additional clinical and epidemiologic data. However, systematic reviews of AI models applied to COVID-19 medical imaging have highlighted problems in the field, including methodologic issues and problems in real-world deployment. Clinical use of such models should be informed by both the promise and potential pitfalls of implementation. How does a practicing radiologist make sense of this complex topic, and what factors should be considered in the implementation of AI tools for imaging of COVID-19? This critical review aims to help the radiologist understand the nuances that impact the clinical deployment of AI for imaging of COVID-19. We review imaging use cases for AI models in COVID-19 (e.g., diagnosis, severity assessment, and prognostication) and explore considerations for AI model development and testing, deployment infrastructure, clinical user interfaces, quality control, and institutional review board and regulatory approvals, with a practical focus on what a radiologist should consider when implementing an AI tool for COVID-19.
http://dx.doi.org/10.2214/AJR.21.26717
October 2021

DeepStrain: A Deep Learning Workflow for the Automated Characterization of Cardiac Mechanics.

Front Cardiovasc Med 2021 3;8:730316. Epub 2021 Sep 3.

Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States.

Myocardial strain analysis from cinematic magnetic resonance imaging (cine-MRI) data provides a more thorough characterization of cardiac mechanics than volumetric parameters such as left-ventricular ejection fraction, but sources of variation including segmentation and motion estimation have limited its wider clinical use. We designed and validated a fast, fully automatic deep learning (DL) workflow, consisting of segmentation and motion estimation convolutional neural networks, to generate both volumetric parameters and strain measures from cine-MRI data. The final motion network design, loss function, and associated hyperparameters are the result of a thorough implementation planned specifically for strain quantification, tested, and compared to other potential alternatives. The optimal configuration was trained using healthy and cardiovascular disease (CVD) subjects (n = 150). DL-based volumetric parameters were correlated (>0.98) and without significant bias relative to parameters derived from manual segmentations in 50 healthy and CVD test subjects. Compared to landmarks manually tracked on tagging-MRI images from 15 healthy subjects, landmark deformation using DL-based motion estimates from paired cine-MRI data resulted in an end-point error of 2.9 ± 1.5 mm. Measures of end-systolic global strain from these cine-MRI data showed no significant biases relative to a tagging-MRI reference method. On 10 healthy subjects, the intraclass correlation coefficient for intra-scanner repeatability was good to excellent (>0.75) for all global measures and most polar map segments. In conclusion, we developed and evaluated the first end-to-end learning-based workflow for automated strain analysis from cine-MRI data to quantitatively characterize cardiac mechanics of healthy and CVD subjects.
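The end-point error quoted in this abstract is simply the Euclidean distance between deformed landmarks and their manually tracked references, averaged over landmarks. A minimal sketch with made-up 2D coordinates:

```python
# Mean Euclidean distance between predicted and reference landmark positions.
# Coordinates below are fabricated for illustration (units, e.g., mm).
import numpy as np

def end_point_error(pred_pts, ref_pts):
    """Mean Euclidean distance between paired landmark sets."""
    pred_pts = np.asarray(pred_pts, dtype=float)
    ref_pts = np.asarray(ref_pts, dtype=float)
    return float(np.mean(np.linalg.norm(pred_pts - ref_pts, axis=1)))

ref = np.array([[10.0, 12.0], [15.0, 18.0], [20.0, 25.0]])
pred = ref + np.array([[3.0, 4.0], [0.0, 0.0], [0.0, 0.0]])  # one 5 mm miss
print(end_point_error(pred, ref))  # mean of 5.0, 0.0, 0.0 -> 5/3
```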
http://dx.doi.org/10.3389/fcvm.2021.730316
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8446607
September 2021

Lung Nodule Malignancy Prediction in Sequential CT Scans: Summary of ISBI 2018 Challenge.

IEEE Trans Med Imaging 2021 Jul 15;PP. Epub 2021 Jul 15.

Lung cancer is by far the leading cause of cancer death in the US. Recent studies have demonstrated the effectiveness of screening using low dose CT (LDCT) in reducing lung cancer related mortality. While lung nodules are detected with a high rate of sensitivity, this exam has a low specificity rate and it is still difficult to separate benign and malignant lesions. The ISBI 2018 Lung Nodule Malignancy Prediction Challenge, developed by a team from the Quantitative Imaging Network of the National Cancer Institute, was focused on the prediction of lung nodule malignancy from two sequential LDCT screening exams using automated (non-manual) algorithms. We curated a cohort of 100 subjects who participated in the National Lung Screening Trial and had established pathological diagnoses. Data from 30 subjects were randomly selected for training and the remaining 70 were used for testing. Participants were evaluated based on the area under the receiver operating characteristic curve (AUC) of nodule-wise malignancy scores generated by their algorithms on the test set. The challenge had 17 participants, with 11 teams submitting reports with method descriptions, as mandated by the challenge rules. Participants used quantitative methods, with reported test AUCs ranging from 0.698 to 0.913. The top five contestants used deep learning approaches, reporting AUCs between 0.87 and 0.91. The teams' predictors did not differ significantly from each other or from a volume change estimate (p = .05 with Bonferroni-Holm correction).
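The nodule-wise evaluation described above (ranking submitted malignancy scores against pathology labels by AUC) can be sketched with scikit-learn; the labels and scores below are fabricated for illustration.

```python
# AUC of continuous malignancy scores against binary pathology labels.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 0, 1, 1, 0, 1, 1])  # 1 = malignant on pathology
scores = np.array([0.1, 0.3, 0.4, 0.35, 0.8, 0.2, 0.9, 0.7])  # submitted scores
print(round(roc_auc_score(y_true, scores), 3))  # → 0.938
```

AUC is equivalent to the probability that a randomly chosen malignant nodule receives a higher score than a randomly chosen benign one (15 of 16 pairs here).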
http://dx.doi.org/10.1109/TMI.2021.3097665
July 2021

Deep Learning-Based Automatic Tumor Burden Assessment of Pediatric High-Grade Gliomas, Medulloblastomas, and Other Leptomeningeal Seeding Tumors.

Neuro Oncol 2021 Jun 26. Epub 2021 Jun 26.

Department of Diagnostic Imaging, Rhode Island Hospital and Alpert Medical School of Brown University, Providence, RI, USA.

Background: Longitudinal measurement of tumor burden with MRI is an essential component of response assessment in pediatric brain tumors. We developed a fully automated pipeline for the segmentation of tumors in pediatric high-grade gliomas, medulloblastomas, and leptomeningeal seeding tumors. We further developed an algorithm for automatic 2D and volumetric size measurement of tumors.

Methods: A preoperative and postoperative cohort were randomly split into training and testing sets in a 4:1 ratio. A 3D U-Net neural network was trained to automatically segment the tumor on T1 contrast-enhanced and T2/FLAIR images. The product of the maximum bidimensional diameters according to the RAPNO criteria (AutoRAPNO) was determined. Performance was compared to that of two expert human raters who performed assessments independently. Volumetric measurements of predicted and expert segmentations were computationally derived and compared.
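The volume agreement reported for this pipeline is summarized with intraclass correlation coefficients. A minimal sketch, assuming the common two-way random-effects, single-measure form ICC(2,1) (the abstract does not state which ICC form was used) and fabricated volumes:

```python
# ICC(2,1) from the standard two-way ANOVA decomposition.
import numpy as np

def icc2_1(ratings):
    """ratings: (n_subjects, k_raters) array -> ICC(2,1)."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)           # per-subject means
    col_means = x.mean(axis=0)           # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subject MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-rater MS
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # error MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

auto = np.array([12.1, 30.4, 7.8, 55.2, 21.0])    # predicted volumes (mL)
manual = np.array([11.8, 31.0, 8.1, 54.0, 22.5])  # expert volumes (mL)
print(round(icc2_1(np.column_stack([auto, manual])), 3))
```

Perfectly identical raters give an ICC of exactly 1; disagreement between the automated and manual volumes pulls the coefficient down.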

Results: A total of 794 pre-operative MRIs from 794 patients and 1,003 post-operative MRIs from 122 patients were included. There was excellent agreement of volumes between preoperative and postoperative predicted and manual segmentations, with ICCs of 0.912 and 0.960 for the two preoperative and 0.947 and 0.896 for the two postoperative models. There was high agreement between AutoRAPNO scores on predicted segmentations and manually calculated scores based on manual segmentations (Rater 2 ICC=0.909; Rater 3 ICC=0.851). Lastly, the performance of AutoRAPNO was superior in repeatability to that of human raters for MRIs with multiple lesions.

Conclusions: Our automated deep learning pipeline demonstrates potential utility for response assessment in pediatric brain tumors. The tool should be further validated in prospective studies.
http://dx.doi.org/10.1093/neuonc/noab151
June 2021

MR spectroscopic imaging predicts early response to anti-angiogenic therapy in recurrent glioblastoma.

Neurooncol Adv 2021 Jan-Dec;3(1):vdab060. Epub 2021 Apr 15.

Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA.

Background: Determining failure to anti-angiogenic therapy in recurrent glioblastoma (GBM) (rGBM) remains a challenge. The purpose of the study was to assess treatment response to bevacizumab-based therapy in patients with rGBM using MR spectroscopy (MRS).

Methods: We performed longitudinal MRI/MRS in 33 patients with rGBM to investigate whether changes in N-acetylaspartate (NAA)/Choline (Cho) and Lactate (Lac)/NAA from baseline to subsequent time points after treatment can predict early failures to bevacizumab-based therapies.

Results: After stratifying based on 9-month survival, longer-term survivors had increased NAA/Cho and decreased Lac/NAA levels compared to shorter-term survivors. ROC analyses showed that intratumoral NAA/Cho correlated with survival at 1 day, 2 weeks, 8 weeks, and 16 weeks. Intratumoral Lac/NAA ROC analyses were predictive of survival at all time points tested. At the 8-week time point, 88% of patients with decreased NAA/Cho did not survive 9 months; furthermore, 90% of individuals with increased Lac/NAA from baseline did not survive to 9 months. No other metabolic ratios tested significantly predicted survival.

Conclusions: Changes in metabolic levels of tumoral NAA/Cho and Lac/NAA can serve as early biomarkers for predicting treatment failure to anti-angiogenic therapy as soon as 1 day after bevacizumab-based therapy. The addition of MRS to conventional MR methods can provide better insight into how anti-angiogenic therapy affects tumor microenvironment and predict patient outcomes.
http://dx.doi.org/10.1093/noajnl/vdab060
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8193903
April 2021

Automated tracking of emergency department abdominal CT findings during the COVID-19 pandemic using natural language processing.

Am J Emerg Med 2021 Nov 27;49:52-57. Epub 2021 May 27.

Department of Radiology, Massachusetts General Hospital, Boston, MA, United States of America; Division of Emergency Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, United States of America; Medically Engineered Solutions in Healthcare Incubator, Innovation in Operations Research Center (MESH IO), Massachusetts General Hospital, Boston, MA, United States of America; Harvard Medical School, Boston, MA, United States of America. Electronic address:

Purpose: During the COVID-19 pandemic, emergency department (ED) volumes have fluctuated. We hypothesized that natural language processing (NLP) models could quantify changes in detection of acute abdominal pathology (acute appendicitis (AA), acute diverticulitis (AD), or bowel obstruction (BO)) on CT reports.

Methods: This retrospective study included 22,182 radiology reports from CT abdomen/pelvis studies performed at an urban ED from January 1, 2018 to August 14, 2020. Using a subset of 2448 manually annotated reports, we trained random forest NLP models to classify the presence of AA, AD, and BO in report impressions. Performance was assessed using 5-fold cross-validation. The NLP classifiers were then applied to all reports.
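A random forest text classifier of this kind can be sketched as below. The TF-IDF feature representation and the example impressions are assumptions for illustration; the abstract does not specify how the text was vectorized.

```python
# Sketch: random forest over a TF-IDF representation of report impressions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

impressions = [
    "acute appendicitis with periappendiceal fat stranding",
    "findings consistent with acute appendicitis",
    "dilated appendix with wall thickening, acute appendicitis",
    "no acute intra-abdominal process",
    "normal appendix, no evidence of appendicitis",
    "unremarkable ct of the abdomen and pelvis",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = acute appendicitis reported

clf = make_pipeline(
    TfidfVectorizer(),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
clf.fit(impressions, labels)
print(clf.score(impressions, labels))  # training accuracy on this toy corpus
```

In the study, performance was instead estimated with 5-fold cross-validation on the 2448 annotated reports; on a corpus that small, a held-out estimate (e.g., cross_val_score with an F1 scorer) is the meaningful check, not training accuracy.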

Results: The NLP classifiers for AA, AD, and BO demonstrated cross-validation classification accuracies between 0.97 and 0.99 and F1-scores between 0.86 and 0.91. When applied to all CT reports, the estimated numbers of AA, AD, and BO cases decreased 43-57% in April 2020 (first regional peak of COVID-19 cases) compared to 2018-2019. However, the number of abdominal pathologies detected rebounded in May-July 2020, with increases above historical averages for AD. The proportions of CT studies with these pathologies did not significantly increase during the pandemic period.

Conclusion: Dramatic decreases in numbers of acute abdominal pathologies detected by ED CT studies were observed early on during the COVID-19 pandemic, though these numbers rapidly rebounded. The proportions of CT cases with these pathologies did not increase, which suggests patients deferred care during the first pandemic peak. NLP can help automatically track findings in ED radiology reporting.
http://dx.doi.org/10.1016/j.ajem.2021.05.057
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8154187
November 2021

Artificial intelligence applied to musculoskeletal oncology: a systematic review.

Skeletal Radiol 2021 May 19. Epub 2021 May 19.

Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.

Developments in artificial intelligence have the potential to improve the care of patients with musculoskeletal tumors. We performed a systematic review of the published scientific literature to identify the current state of the art of artificial intelligence applied to musculoskeletal oncology, including both primary and metastatic tumors, and across the radiology, nuclear medicine, pathology, clinical research, and molecular biology literature. Through this search, we identified 252 primary research articles, of which 58 used deep learning and 194 used other machine learning techniques. Articles involving deep learning have mostly involved bone scintigraphy, histopathology, and radiologic imaging. Articles involving other machine learning techniques have mostly involved transcriptomic analyses, radiomics, and clinical outcome prediction models using medical records. These articles predominantly present proof-of-concept work, other than the automated bone scan index for bone metastasis quantification, which has translated to clinical workflows in some regions. We systematically review and discuss this literature, highlight opportunities for multidisciplinary collaboration, and identify potentially clinically useful topics with a relative paucity of research attention. Musculoskeletal oncology is an inherently multidisciplinary field, and future research will need to integrate and synthesize noisy siloed data from across clinical, imaging, and molecular datasets. Building the data infrastructure for collaboration will help to accelerate progress towards making artificial intelligence truly useful in musculoskeletal oncology.
http://dx.doi.org/10.1007/s00256-021-03820-w
May 2021

SPIE-AAPM-NCI BreastPathQ challenge: an image analysis challenge for quantitative tumor cellularity assessment in breast cancer histology images following neoadjuvant treatment.

J Med Imaging (Bellingham) 2021 May 8;8(3):034501. Epub 2021 May 8.

University of Toronto, Medical Biophysics, Toronto, Ontario, Canada.

Purpose: The breast pathology quantitative biomarkers (BreastPathQ) challenge was a grand challenge organized jointly by the International Society for Optics and Photonics (SPIE), the American Association of Physicists in Medicine (AAPM), the U.S. National Cancer Institute (NCI), and the U.S. Food and Drug Administration (FDA). The task of the BreastPathQ challenge was computerized estimation of tumor cellularity (TC) in breast cancer histology images following neoadjuvant treatment.

Approach: A total of 39 teams developed, validated, and tested their TC estimation algorithms during the challenge. The training, validation, and testing sets consisted of 2394, 185, and 1119 image patches originating from 63, 6, and 27 scanned pathology slides from 33, 4, and 18 patients, respectively. The summary performance metric used for comparing and ranking algorithms was the average prediction probability concordance (PK) using scores from two pathologists as the TC reference standard.

Results: Test PK performance ranged from 0.497 to 0.941 across the 100 submitted algorithms. The submitted algorithms generally performed well in estimating TC, with high-performing algorithms obtaining comparable results to the average interrater PK of 0.927 from the two pathologists providing the reference TC scores.

Conclusions: The SPIE-AAPM-NCI BreastPathQ challenge was a success, indicating that artificial intelligence/machine learning algorithms may be able to approach human performance for cellularity assessment and may have some utility in clinical practice for improving efficiency and reducing reader variability. The BreastPathQ challenge can be accessed on the Grand Challenge website.
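The PK metric used to rank submissions can be sketched as a pairwise concordance: among case pairs with different reference cellularity scores, the fraction ordered the same way by the algorithm, with prediction ties counted as half-concordant. This is one common definition of prediction probability concordance; the challenge's exact formula is not reproduced here.

```python
# Pairwise prediction probability concordance (PK) sketch.
import numpy as np

def pk_concordance(reference, prediction):
    ref = np.asarray(reference, dtype=float)
    pred = np.asarray(prediction, dtype=float)
    concordant, valid = 0.0, 0
    for i in range(len(ref)):
        for j in range(i + 1, len(ref)):
            if ref[i] == ref[j]:
                continue  # pairs tied on the reference are ignored
            valid += 1
            d = (pred[i] - pred[j]) * (ref[i] - ref[j])
            concordant += 1.0 if d > 0 else (0.5 if d == 0 else 0.0)
    return concordant / valid

ref = [0.0, 0.2, 0.5, 0.9]      # pathologist cellularity scores
pred = [0.1, 0.15, 0.6, 0.8]    # algorithm scores, same ordering
print(pk_concordance(ref, pred))  # → 1.0
```

Like AUC, PK depends only on the ordering of the predictions, so an algorithm need not reproduce the pathologists' absolute scores to score well.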
http://dx.doi.org/10.1117/1.JMI.8.3.034501
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8107263
May 2021

The RSNA Pulmonary Embolism CT Dataset.

Radiol Artif Intell 2021 Mar 20;3(2):e200254. Epub 2021 Jan 20.

Department of Medical Imaging, Unity Health Toronto, University of Toronto, 30 Bond St, Toronto, ON, Canada M5B 1W8 (E.C.); Department of Diagnostic Imaging, Universidade Federal de São Paulo, São Paulo, Brazil (F.C.K.); Diagnósticos da América SA (Dasa) (F.C.K.); Department of Radiology, University of Kentucky, Lexington, Ky (S.B.H.); Department of Diagnostic Radiology, University of Texas MD Anderson Cancer Center, Houston, Tex (C.C.W.); Department of Radiology, Stanford University, Stanford, Calif (M.P.L., S.S.H.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology and Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Mass (J.K.); The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); MD.ai, New York, NY (A.S.); Department of Radiology, Koc University School of Medicine, Istanbul, Turkey (E.A.); Department of Radiology and Nuclear Medicine, Alfred Health, Monash University, Melbourne, Australia (M.L.); Department of Radiodiagnosis, Fortis Escorts Heart Institute, New Delhi, India (P.K.); Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, University of Jordan, Amman, Jordan (K.A.M.); Departamento de Imagenología, Hospital Regional de Alta Especialidad de la Península de Yucatán, Mérida, Mexico (D.C.N.R.); Department of Radiology, University of Pittsburgh Medical Center, Pittsburgh, Pa (J.W.S.); Department of Radiology, Cooper University Hospital, Camden, NJ (P. Germaine); A Coruña University Hospital, A Coruña, Spain (E.C.L.); Swiss Medical Group, Buenos Aires, Argentina (T.A.); Inland Imaging, Spokane, Wash (P. Gupta); AMRI Hospitals, Kolkata, India (M.J.); Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Tex (F.U.K.); Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, Md (C.T.L.); Department of Radiology and Imaging Sciences, Tata Medical Center, Kolkata, India (S.S.); Department of Radiology, University of New Mexico, Albuquerque, NM (J.W.R.); Department of Radiology, Universitair Ziekenhuis Brussel, Jette, Belgium (C.C.B.); Department of Radiology and Biomedical Imaging, University of California-San Francisco, San Francisco, Calif (J.M.).

http://dx.doi.org/10.1148/ryai.2021200254
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8043364
March 2021

Construction of a Machine Learning Dataset through Collaboration: The RSNA 2019 Brain CT Hemorrhage Challenge.

Radiol Artif Intell 2020 May 29;2(3):e190211. Epub 2020 Apr 29.

Department of Radiology/Division of Neuroradiology, Thomas Jefferson University Hospital, 132 S Tenth St, Suite 1080B Main Building, Philadelphia, PA 19107 (A.E.F.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); Department of Radiology, Stanford University, Stanford, Calif (S.S.H.); Department of Radiology and Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Mass (J.K.); Quantitative Sciences Unit, Stanford University, Stanford, Calif (R.B.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.M.); MD.ai, New York, NY (A.S.); Department of Diagnostic Imaging, Universidade Federal de São Paulo, São Paulo, Brazil (F.C.K.); Department of Radiology, Stanford University, Stanford, Calif (M.P.L.); Department of Radiology, University of Alabama at Birmingham, Birmingham, Ala (G.C.); Faculty of Health and Medical Sciences, University of Western Australia, Perth, Australia (L. Cala); Advanced Diagnostic Imaging, Clínica DAPI, Curitiba, Brazil (L. Coelho); Department of Radiology, University of Washington, Seattle, Wash (M.M.); Department of Radiology, Baylor College of Medicine, Houston, Tex (F.M., C.L.); Department of Radiology, University of Ottawa, Ottawa, Canada (E.M.); Department of Radiology & Biomedical Imaging, Yale University, New Haven, Conn (I.I., V.Z.); Department of Medical Imaging, Gold Coast University Hospital, Southport, Australia (O.M.); Department of Neuroradiology, University of Utah Health Sciences Center, Salt Lake City, Utah (L.S.); Department of Radiology and Medical Imaging, University of Virginia Health, Charlottesville, Va (D.J.); Division of Neuroradiology, University of Texas Southwestern Medical Center, Dallas, Tex (A.A.); Department of Radiology, Albert Einstein Healthcare Network, Philadelphia, Pa (R.K.L.); and Department of Radiology, SUNY Downstate Medical Center, Albany, NY (J.N.).

This dataset is composed of annotations of the five hemorrhage subtypes (subarachnoid, intraventricular, subdural, epidural, and intraparenchymal hemorrhage) typically encountered at brain CT.
http://dx.doi.org/10.1148/ryai.2020190211
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8082297
May 2020

Diagnosability of Synthetic Retinal Fundus Images for Plus Disease Detection in Retinopathy of Prematurity.

AMIA Annu Symp Proc 2020 25;2020:329-337. Epub 2021 Jan 25.

Medical Informatics & Clinical Epidemiology.

Advances in generative adversarial networks have allowed for engineering of highly-realistic images. Many studies have applied these techniques to medical images. However, evaluation of generated medical images often relies upon image quality and reconstruction metrics, and subjective evaluation by laypersons. This is acceptable for generation of images depicting everyday objects, but not for medical images, where there may be subtle features experts rely upon for diagnosis. We implemented the pix2pix generative adversarial network for retinal fundus image generation, and evaluated the ability of experts to identify generated images as such and to form accurate diagnoses of plus disease in retinopathy of prematurity. We found that, while experts could discern between real and generated images, the diagnoses between image sets were similar. By directly evaluating and confirming physicians' abilities to diagnose generated retinal fundus images, this work supports conclusions that generated images may be viable for dataset augmentation and physician training.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8075515
June 2021

Automated Assessment and Tracking of COVID-19 Pulmonary Disease Severity on Chest Radiographs using Convolutional Siamese Neural Networks.

Radiol Artif Intell 2020 Jul 22;2(4):e200079. Epub 2020 Jul 22.

Athinoula A. Martinos Center for Biomedical Imaging (M.D.L., N.T.A., M.G., K.C., P.S., J.K.C.), Department of Radiology (F.D., M.L.), Division of Thoracic Imaging and Intervention (B.P.L, D.P.M.), Division of Abdominal Imaging (S.I.L., A.O., A.P.), and MGH and BWH Center for Clinical Data Science (J.K.) of the Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.

Purpose: To develop an automated measure of COVID-19 pulmonary disease severity on chest radiographs (CXRs), for longitudinal disease tracking and outcome prediction.

Materials And Methods: A convolutional Siamese neural network-based algorithm was trained to output a measure of pulmonary disease severity on CXRs (pulmonary x-ray severity (PXS) score), using weakly-supervised pretraining on ∼160,000 anterior-posterior images from CheXpert and transfer learning on 314 frontal CXRs from COVID-19 patients. The algorithm was evaluated on internal and external test sets from different hospitals (154 and 113 CXRs respectively). PXS scores were correlated with radiographic severity scores independently assigned by two thoracic radiologists and one in-training radiologist (Pearson r). For 92 internal test set patients with follow-up CXRs, PXS score change was compared to radiologist assessments of change (Spearman ρ). The association between PXS score and subsequent intubation or death was assessed. Bootstrap 95% confidence intervals (CI) were calculated.
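The correlation-with-confidence-interval reporting described in this Methods paragraph can be sketched with a simple percentile bootstrap over cases. The data are synthetic, and the study's exact bootstrap procedure is not specified here.

```python
# Pearson correlation between model scores and radiologist severity ratings,
# with a percentile-bootstrap 95% CI (resampling cases with replacement).
import numpy as np

rng = np.random.default_rng(0)
radiologist = rng.uniform(0, 8, 150)         # synthetic severity ratings
pxs = radiologist + rng.normal(0, 1.0, 150)  # noisy synthetic model scores

def pearson_r(a, b):
    return float(np.corrcoef(a, b)[0, 1])

r = pearson_r(radiologist, pxs)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(pxs), len(pxs))  # resample cases
    boot.append(pearson_r(radiologist[idx], pxs[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"r = {r:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Bootstrapping over cases (rather than assuming a parametric sampling distribution for r) is a common choice when scores are bounded and not normally distributed, as with ordinal severity ratings.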

Results: PXS scores correlated with radiographic pulmonary disease severity scores assigned to CXRs in the internal and external test sets (r=0.86 (95%CI 0.80-0.90) and r=0.86 (95%CI 0.79-0.90) respectively). The direction of change in PXS score in follow-up CXRs agreed with radiologist assessment (ρ=0.74 (95%CI 0.63-0.81)). In patients not intubated on the admission CXR, the PXS score predicted subsequent intubation or death within three days of hospital admission (area under the receiver operating characteristic curve=0.80 (95%CI 0.75-0.85)).

Conclusion: A Siamese neural network-based severity score automatically measures radiographic COVID-19 pulmonary disease severity, which can be used to track disease change and predict subsequent intubation or death.
http://dx.doi.org/10.1148/ryai.2020200079
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7392327
July 2020

Prolonged Intubation in Patients With Prior Cerebrovascular Disease and COVID-19.

Front Neurol 2021 9;12:642912. Epub 2021 Apr 9.

Department of Neurology, Massachusetts General Hospital, Boston, MA, United States.

Patients with comorbidities are at increased risk for poor outcomes in COVID-19, yet data on patients with prior neurological disease remain limited. Our objective was to determine the odds of critical illness and duration of mechanical ventilation in patients with prior cerebrovascular disease and COVID-19. An observational study of 1,128 consecutive adult patients admitted to an academic center in Boston, Massachusetts, and diagnosed with laboratory-confirmed COVID-19. We tested the association between prior cerebrovascular disease and critical illness, defined as mechanical ventilation (MV) or death by day 28, using logistic regression with inverse probability weighting of the propensity score. Among intubated patients, we estimated the cumulative incidence of successful extubation without death over 45 days using competing risk analysis. Of the 1,128 adults with COVID-19, 350 (36%) were critically ill by day 28. The median age of patients was 59 years (SD: 18 years) and 640 (57%) were men. As of June 2nd, 2020, 127 (11%) patients had died. A total of 177 patients (16%) had prior cerebrovascular disease. Prior cerebrovascular disease was significantly associated with critical illness (OR = 1.54, 95% CI = 1.14-2.07), a lower rate of successful extubation (cause-specific HR = 0.57, 95% CI = 0.33-0.98), and increased duration of intubation (restricted mean time difference = 4.02 days, 95% CI = 0.34-10.92) compared to patients without cerebrovascular disease. Prior cerebrovascular disease adversely affects COVID-19 outcomes in hospitalized patients. Further study is required to determine if this subpopulation requires closer monitoring for disease progression during COVID-19.
http://dx.doi.org/10.3389/fneur.2021.642912
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8062773
April 2021

Radiomics Repeatability Pitfalls in a Scan-Rescan MRI Study of Glioblastoma.

Radiol Artif Intell 2021 Jan 16;3(1):e190199. Epub 2020 Dec 16.

Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology (K.V.H., J.B.P., A.L.B., K.C., P.S., J.M.B., M.C.P., B.R.R., J.K.C.), and Stephen E. and Catherine Pappas Center for Neuro-Oncology (T.T.B., E.R.G.), Massachusetts General Hospital, 149 13th St, Charlestown, MA 02129; and Harvard-MIT Division of Health Sciences and Technology, Cambridge, Mass (K.V.H., J.B.P., K.C.).

Purpose: To determine the influence of preprocessing on the repeatability and redundancy of radiomics features extracted using a popular open-source radiomics software package in a scan-rescan glioblastoma MRI study.

Materials And Methods: In this secondary analysis, T2-weighted fluid-attenuated inversion recovery (FLAIR) and T1-weighted postcontrast images from 48 patients (mean age, 56 years [range, 22-77 years]) diagnosed with glioblastoma were included from two prospective studies (ClinicalTrials.gov NCT00662506 [2009-2011] and NCT00756106 [2008-2011]). All patients underwent two baseline scans 2-6 days apart using identical imaging protocols on 3-T MRI systems. No treatment occurred between scan and rescan, and tumors were essentially unchanged visually. Radiomic features were extracted by using PyRadiomics (https://pyradiomics.readthedocs.io/) under varying conditions, including normalization strategies and intensity quantization. Subsequently, intraclass correlation coefficients were determined between feature values of the scan and rescan.

Results: Shape features showed a higher repeatability than intensity (adjusted P < .001) and texture features (adjusted P < .001) for both T2-weighted FLAIR and T1-weighted postcontrast images. Normalization improved the overlap between the region of interest intensity histograms of scan and rescan (adjusted P < .001 for both T2-weighted FLAIR and T1-weighted postcontrast images), except in scans where brain extraction failed. As such, normalization significantly improved the repeatability of intensity features from T2-weighted FLAIR scans (adjusted P = .003 [z score normalization] and adjusted P = .002 [histogram matching]). The use of a relative intensity binning strategy as opposed to default absolute intensity binning reduced correlation between gray-level co-occurrence matrix features after normalization.
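The two intensity quantization strategies compared here differ only in how bin edges are chosen before texture matrices are computed. A minimal NumPy sketch (our own illustration with made-up defaults, not PyRadiomics' internal settings):

```python
import numpy as np

def quantize_absolute(roi, bin_width=25.0):
    # Absolute (fixed bin width) binning: bin edges are the same for
    # every image, so gray levels shift if intensities are not normalized.
    return np.floor(roi / bin_width).astype(int)

def quantize_relative(roi, n_bins=32):
    # Relative (fixed bin count) binning: edges span this ROI's own
    # intensity range, so the discretization adapts to each scan.
    lo, hi = roi.min(), roi.max()
    edges = np.linspace(lo, hi, n_bins + 1)
    # Interior edges only; values land in bins 0 .. n_bins - 1.
    return np.digitize(roi, edges[1:-1])
```

Gray-level co-occurrence matrices built on these discretized levels inherit the choice, which is why the binning strategy affects feature redundancy after normalization.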

Conclusion: Both normalization and intensity quantization have an effect on the level of repeatability and redundancy of features, emphasizing the importance of both accurate reporting of methodology in radiomics articles and understanding the limitations of choices made in pipeline design. © RSNA, 2020See also the commentary by Tiwari and Verma in this issue.
http://dx.doi.org/10.1148/ryai.2020190199
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7845781
January 2021

Right Ventricular Strain Is Common in Intubated COVID-19 Patients and Does Not Reflect Severity of Respiratory Illness.

J Intensive Care Med 2021 Aug 30;36(8):900-909. Epub 2021 Mar 30.

Department of Anesthesia, Critical Care, and Pain Medicine, 2348Massachusetts General Hospital, Boston, MA, USA.

Background: Right ventricular (RV) dysfunction is common and associated with worse outcomes in patients with coronavirus disease 2019 (COVID-19). In non-COVID-19 acute respiratory distress syndrome, RV dysfunction develops due to pulmonary hypoxic vasoconstriction, inflammation, and alveolar overdistension or atelectasis. Although similar pathogenic mechanisms may induce RV dysfunction in COVID-19, other COVID-19-specific pathology, such as pulmonary endothelialitis, thrombosis, or myocarditis, may also affect RV function. We quantified RV dysfunction by echocardiographic strain analysis and investigated its correlation with disease severity, ventilatory parameters, biomarkers, and imaging findings in critically ill COVID-19 patients.

Methods: We determined RV free wall longitudinal strain (FWLS) in 32 patients receiving mechanical ventilation for COVID-19-associated respiratory failure. Demographics, comorbid conditions, ventilatory parameters, medications, and laboratory findings were extracted from the medical record. Chest imaging was assessed to determine the severity of lung disease and the presence of pulmonary embolism.

Results: Abnormal FWLS was present in 66% of mechanically ventilated COVID-19 patients and was associated with higher lung compliance (39.6 vs 29.4 mL/cmH2O, P = 0.016), lower airway plateau pressures (21 vs 24 cmH2O, P = 0.043), lower tidal volume ventilation (5.74 vs 6.17 cc/kg, P = 0.031), and reduced left ventricular function. FWLS correlated negatively with age (r = -0.414, P = 0.018) and with serum troponin (r = 0.402, P = 0.034). Patients with abnormal RV strain did not exhibit decreased oxygenation or increased disease severity based on inflammatory markers, vasopressor requirements, or chest imaging findings.

Conclusions: RV dysfunction is common among critically ill COVID-19 patients and is not related to abnormal lung mechanics or ventilatory pressures. Instead, patients with abnormal FWLS had more favorable lung compliance. RV dysfunction may be secondary to diffuse intravascular micro- and macro-thrombosis or direct myocardial damage.

Trial Registration: National Institutes of Health #NCT04306393. Registered 10 March 2020, https://clinicaltrials.gov/ct2/show/NCT04306393.
http://dx.doi.org/10.1177/08850666211006335
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8267080
August 2021

Applications of Artificial Intelligence for Retinopathy of Prematurity Screening.

Pediatrics 2021 03;147(3)

Athinoula A. Martinos Center for Biomedical Imaging and Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts.

Objectives: Childhood blindness from retinopathy of prematurity (ROP) is increasing as a result of improvements in neonatal care worldwide. We evaluate the effectiveness of artificial intelligence (AI)-based screening in an Indian ROP telemedicine program and whether differences in ROP severity between neonatal care units (NCUs) identified by using AI are related to differences in oxygen-titrating capability.

Methods: External validation study of an existing AI-based quantitative severity scale for ROP on a data set of images from the Retinopathy of Prematurity Eradication Save Our Sight ROP telemedicine program in India. All images were assigned an ROP severity score (1-9) by using the Imaging and Informatics in Retinopathy of Prematurity Deep Learning system. We calculated the area under the receiver operating characteristic curve and sensitivity and specificity for treatment-requiring retinopathy of prematurity. Using multivariable linear regression, we evaluated the mean and median ROP severity in each NCU as a function of mean birth weight, gestational age, and the presence of oxygen blenders and pulse oxygenation monitors.

Results: The area under the receiver operating characteristic curve for detection of treatment-requiring retinopathy of prematurity was 0.98, with 100% sensitivity and 78% specificity. We found higher median (interquartile range) ROP severity in NCUs without oxygen blenders and pulse oxygenation monitors, most apparent in bigger infants (>1500 g and 31 weeks' gestation: 2.7 [2.5-3.0] vs 3.1 [2.4-3.8]; P = .007, with adjustment for birth weight and gestational age).

Conclusions: Integration of AI into ROP screening programs may lead to improved access to care for secondary prevention of ROP and may facilitate assessment of disease epidemiology and NCU resources.
http://dx.doi.org/10.1542/peds.2020-016618
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7924138
March 2021

Deep learning models for COVID-19 chest x-ray classification: Preventing shortcut learning using feature disentanglement.

medRxiv 2021 Feb 13. Epub 2021 Feb 13.

In response to the COVID-19 global pandemic, recent research has proposed creating deep learning-based models that use chest radiographs (CXRs) in a variety of clinical tasks to help manage the crisis. However, existing datasets of CXRs from COVID-19+ patients are relatively small, and researchers often pool CXR data from multiple sources, for example, using different x-ray machines in various patient populations under different clinical scenarios. Deep learning models trained on such datasets have been shown to overfit to erroneous features instead of learning pulmonary characteristics, a phenomenon known as shortcut learning. We propose adding feature disentanglement to the training process, forcing the models to identify pulmonary features from the images while penalizing them for learning features that can discriminate between the original datasets that the images come from. We find that models trained in this way indeed have better generalization performance on unseen data; in the best case, it improved AUC by 0.13 on held-out data. We further find that this approach outperforms masking out non-lung parts of the CXRs and performing histogram equalization, both of which are recently proposed methods for removing biases in CXR datasets.
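Of the two baseline bias-mitigation steps mentioned, histogram equalization is easy to sketch. The following is a generic implementation for 8-bit images, offered as an illustration of the baseline rather than the paper's actual preprocessing:

```python
import numpy as np

def hist_equalize(img):
    # Map each gray level through the empirical CDF so the output
    # intensities spread over the full 8-bit range; this removes
    # dataset-specific brightness/contrast offsets that a model
    # could otherwise use as a shortcut.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]
```

Feature disentanglement, by contrast, works at the loss level, penalizing representations from which the source dataset can be predicted, which is why it can remove biases that simple intensity remapping cannot.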
http://dx.doi.org/10.1101/2021.02.11.20196766
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7885941
February 2021

Automated Radiology-Arthroscopy Correlation of Knee Meniscal Tears Using Natural Language Processing Algorithms.

Acad Radiol 2021 Feb 11. Epub 2021 Feb 11.

Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts.

Rationale And Objectives: Train and apply natural language processing (NLP) algorithms for automated radiology-arthroscopy correlation of meniscal tears.

Materials And Methods: In this retrospective single-institution study, we trained supervised machine learning models (logistic regression, support vector machine, and random forest) to detect medial or lateral meniscus tears on free-text MRI reports. We trained and evaluated model performances with cross-validation using 3593 manually annotated knee MRI reports. To assess radiology-arthroscopy correlation, we then randomly partitioned this dataset 80:20 for training and testing, where 108 test set MRIs were followed by knee arthroscopy within 1 year. These free-text arthroscopy reports were also manually annotated. The NLP algorithms trained on the knee MRI training dataset were then evaluated on the MRI and arthroscopy report test datasets. We assessed radiology-arthroscopy agreement using the ensembled NLP-extracted findings versus manually annotated findings.

Results: The NLP models showed high cross-validation performance for meniscal tear detection on knee MRI reports (medial meniscus F1 scores 0.93-0.94, lateral meniscus F1 scores 0.86-0.88). When these algorithms were evaluated on arthroscopy reports, despite never training on arthroscopy reports, performance was similar, though higher with model ensembling (medial meniscus F1 score 0.97, lateral meniscus F1 score 0.99). However, ensembling did not improve performance on knee MRI reports. In the radiology-arthroscopy test set, the ensembled NLP models were able to detect mismatches between MRI and arthroscopy reports with sensitivity 79% and specificity 87%.
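The model ensembling and F1 evaluation reported above can be illustrated generically; this is our sketch of the standard definitions, not the study's pipeline, and the function names are hypothetical.

```python
from collections import Counter

def majority_vote(predictions):
    # predictions: one label list per model (e.g. logistic regression,
    # SVM, random forest); returns the majority label per report.
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

def f1(y_true, y_pred):
    # Harmonic mean of precision and recall for binary labels (1 = tear).
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

Radiology-arthroscopy agreement then reduces to comparing the ensembled MRI-report labels against the arthroscopy-report labels for each knee.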

Conclusion: Radiology-arthroscopy correlation can be automated for knee meniscal tears using NLP algorithms, which shows promise for education and quality improvement.
http://dx.doi.org/10.1016/j.acra.2021.01.017
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8355247
February 2021

Deep Learning for the Diagnosis of Stage in Retinopathy of Prematurity: Accuracy and Generalizability across Populations and Cameras.

Ophthalmol Retina 2021 10 6;5(10):1027-1035. Epub 2021 Feb 6.

Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon. Electronic address:

Purpose: Stage is an important feature to identify in retinal images of infants at risk of retinopathy of prematurity (ROP). The purpose of this study was to implement a convolutional neural network (CNN) for binary detection of stages 1, 2, and 3 in ROP and to evaluate its generalizability across different populations and camera systems.

Design: Diagnostic validation study of CNN for stage detection.

Participants: Retinal fundus images obtained from preterm infants during routine ROP screenings.

Methods: Two datasets were used: 5943 fundus images obtained by RetCam camera (Natus Medical, Pleasanton, CA) from 9 North American institutions and 5049 images obtained by 3nethra camera (Forus Health Incorporated, Bengaluru, India) from 4 hospitals in Nepal. Images were labeled based on the presence of stage by 1 to 3 expert graders. Three CNN models were trained using 5-fold cross-validation on datasets from North America alone, Nepal alone, and a combined dataset and were evaluated on 2 held-out test sets consisting of 708 and 247 images from the Nepali and North American datasets, respectively.

Main Outcome Measures: Convolutional neural network performance was evaluated using area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPRC), sensitivity, and specificity.

Results: Both the North American- and Nepali-trained models demonstrated high performance on a test set from the same population: AUROC, 0.99; AUPRC, 0.98; sensitivity, 94%; and AUROC, 0.97; AUPRC, 0.91; and sensitivity, 73%; respectively. However, the performance of each model decreased to AUROC of 0.96 and AUPRC of 0.88 (sensitivity, 52%) and AUROC of 0.62 and AUPRC of 0.36 (sensitivity, 44%) when evaluated on a test set from the other population. Compared with the models trained on individual datasets, the model trained on a combined dataset achieved improved performance on each respective test set: sensitivity improved from 94% to 98% on the North American test set and from 73% to 82% on the Nepali test set.
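The AUPRC reported here is the area under the precision-recall curve; a common estimator is average precision, sketched below as a generic illustration (not the authors' evaluation code):

```python
def average_precision(y_true, scores):
    # Sort cases by descending model score, then average the precision
    # measured at each rank where a true positive is retrieved.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, total = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i] == 1:
            hits += 1
            total += hits / rank
    return total / max(hits, 1)
```

Unlike AUROC, this metric is sensitive to class prevalence, which is why it is informative for rare findings such as ROP stage in a screening population.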

Conclusions: A CNN can identify accurately the presence of ROP stage in retinal images, but performance depends on the similarity between training and testing populations. We demonstrated that internal and external performance can be improved by increasing the heterogeneity of the training dataset, in this case by combining images from different populations and cameras.
http://dx.doi.org/10.1016/j.oret.2020.12.013
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8364291
October 2021

Multi-Radiologist User Study for Artificial Intelligence-Guided Grading of COVID-19 Lung Disease Severity on Chest Radiographs.

Acad Radiol 2021 04 18;28(4):572-576. Epub 2021 Jan 18.

Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts. Electronic address:

Rationale And Objectives: Radiographic findings of COVID-19 pneumonia can be used for patient risk stratification; however, radiologist reporting of disease severity is inconsistent on chest radiographs (CXRs). We aimed to see if an artificial intelligence (AI) system could help improve radiologist interrater agreement.

Materials And Methods: We performed a retrospective multi-radiologist user study to evaluate the impact of an AI system, the PXS score model, on the grading of categorical COVID-19 lung disease severity on 154 chest radiographs into four ordinal grades (normal/minimal, mild, moderate, and severe). Four radiologists (two thoracic and two emergency radiologists) independently interpreted 154 CXRs from 154 unique patients with COVID-19 hospitalized at a large academic center, before and after using the AI system (median washout time interval was 16 days). Three different thoracic radiologists assessed the same 154 CXRs using an updated version of the AI system trained on more imaging data. Radiologist interrater agreement was evaluated using Cohen and Fleiss kappa where appropriate. The lung disease severity categories were associated with clinical outcomes from a previously published outcomes dataset using the Fisher exact test and the chi-square test for trend.

Results: Use of the AI system improved radiologist interrater agreement (Fleiss κ = 0.40 to 0.66, before and after use of the system). The Fleiss κ for three radiologists using the updated AI system was 0.74. Severity categories were significantly associated with subsequent intubation or death within 3 days.
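Fleiss κ, the agreement statistic reported above, extends Cohen κ to more than two raters. A textbook implementation (ours, not the authors' code) for the case where every radiograph is graded by the same number of raters:

```python
import numpy as np

def fleiss_kappa(ratings):
    # ratings: N x k matrix; ratings[i, j] = number of raters who
    # assigned subject i to category j (same rater count per subject).
    ratings = np.asarray(ratings, dtype=float)
    n = ratings[0].sum()                        # raters per subject
    p_j = ratings.sum(axis=0) / ratings.sum()   # category proportions
    P_i = ((ratings ** 2).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()
    # Chance-corrected agreement: 1 = perfect, 0 = chance level.
    return (P_bar - P_e) / (1 - P_e)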
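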

Conclusion: An AI system used at the time of CXR study interpretation can improve the interrater agreement of radiologists.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1016/j.acra.2021.01.016DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7813473PMC
April 2021

Quantitative tumor heterogeneity MRI profiling improves machine learning-based prognostication in patients with metastatic colon cancer.

Eur Radiol 2021 Aug 16;31(8):5759-5767. Epub 2021 Jan 16.

Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit Street, GRB #290, Boston, MA, 02114, USA.

Objectives: Intra-tumor heterogeneity has been previously shown to be an independent predictor of patient survival. The goal of this study is to assess the role of quantitative MRI-based measures of intra-tumor heterogeneity as predictors of survival in patients with metastatic colorectal cancer.

Methods: In this IRB-approved retrospective study, we identified 55 patients with stage 4 colon cancer with known hepatic metastasis on MRI. Ninety-four metastatic hepatic lesions were identified on post-contrast images and manually volumetrically segmented. A heterogeneity phenotype vector was extracted from each lesion. Univariate regression analysis was used to assess the contribution of 110 extracted features to survival prediction. A random forest-based machine learning technique was applied to the feature vector and to the standard prognostic clinical and pathologic variables. The dataset was divided into a training and test set at a ratio of 4:1. ROC analysis and confusion matrix analysis were used to assess classification performance.

Results: Mean survival time was 39 ± 3.9 months for the study population. A total of 22 texture features were associated with patient survival (p < 0.05). The trained random forest machine learning model that included standard clinical and pathological prognostic variables resulted in an area under the ROC curve of 0.83. A model that adds imaging-based heterogeneity features to the clinical and pathological variables resulted in improved model performance for survival prediction with an AUC of 0.94.

Conclusions: MRI-based texture features are associated with patient outcomes and improve the performance of standard clinical and pathological variables for predicting patient survival in metastatic colorectal cancer.

Key Points: • MRI-based tumor heterogeneity texture features are associated with patient survival outcomes. • MRI-based tumor texture features complement standard clinical and pathological variables for prognosis prediction in metastatic colorectal cancer. • Agglomerative hierarchical clustering shows that patient survival outcomes are associated with different MRI tumor profiles.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1007/s00330-020-07673-0DOI Listing
August 2021

In the Era of Deep Learning, Why Reconstruct an Image at All?

J Am Coll Radiol 2021 Jan;18(1 Pt B):170-173

Chief Technology and Digital Officer, The University of Texas MD Anderson Cancer Center, Houston, Texas.

View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1016/j.jacr.2020.09.050DOI Listing
January 2021

The RSNA International COVID-19 Open Radiology Database (RICORD).

Radiology 2021 04 5;299(1):E204-E213. Epub 2021 Jan 5.

From the Department of Radiology, Stanford University, Stanford, Calif (E.B.T., J.S., B.P.P.); Department of Radiology, University of Pennsylvania Hospital, Philadelphia, Pa (S.S., M. Hershman, L.R.); Department of Radiology, Stanford University School of Medicine, Stanford University Medical Center, 725 Welch Rd, Room 1675, Stanford, CA 94305-5913 (M.P.L.); Department of Medical Imaging, University of Toronto, Unity Health Toronto, Toronto, Canada (E.C.); Department of Radiology, Mayo Clinic, Rochester, Minn (B.J.E., P.R.); Department of Radiology, Weill Cornell Medicine, New York, NY (G.S.); MD.ai, New York, NY (A.S.); Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, Mass (J.K.C.); Department of Diagnostic and Interventional Radiology, Cairo University Kasr Alainy Faculty of Medicine, Cairo, Egypt (M. Hafez); Department of Radiology, The Ottawa Hospital, Ottawa, Canada (S.J.); Department of Radiology and Biomedical Imaging, Center for Intelligent Imaging, San Francisco, Calif (J.M.); Department of Radiology, Koç University School of Medicine, Koç University Hospital, Istanbul, Turkey (E.A.); Department of Radiology, ETZ Hospital, Tilburg, the Netherlands (E.R.R.); Department of Radiology, University of Ghent, Ghent, Belgium (E.R.R.); Department of Diagnostic Imaging, Universidade Federal de São Paulo, São Paulo, Brazil (F.C.K.); Department of Radiology, Netherlands Cancer Institute, Amsterdam, the Netherlands (L.T.); Department of Radiology, NYU Grossman School of Medicine, Center for Advanced Imaging Innovation and Research, Laura and Isaac Perlmutter Cancer Center, New York, NY (L.M.); Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wis (J.P.K.); and Department of Thoracic Imaging, University of Texas MD Anderson Cancer Center, Houston, Tex (C.C.W.).

The coronavirus disease 2019 (COVID-19) pandemic is a global health care emergency. Although reverse-transcription polymerase chain reaction testing is the reference standard method to identify patients with COVID-19 infection, chest radiography and CT play a vital role in the detection and management of these patients. Prediction models for COVID-19 imaging are rapidly being developed to support medical decision making. However, inadequate availability of a diverse annotated data set has limited the performance and generalizability of existing models. To address this unmet need, the RSNA and Society of Thoracic Radiology collaborated to develop the RSNA International COVID-19 Open Radiology Database (RICORD). This database is the first multi-institutional, multinational, expert-annotated COVID-19 imaging data set. It is made freely available to the machine learning community as a research and educational resource for COVID-19 chest imaging. Pixel-level volumetric segmentation with clinical annotations was performed by thoracic radiology subspecialists for all COVID-19-positive thoracic CT scans. The labeling schema was coordinated with other international consensus panels and COVID-19 data annotation efforts, the European Society of Medical Imaging Informatics, the American College of Radiology, and the American Association of Physicists in Medicine. Study-level COVID-19 classification labels for chest radiographs were annotated by three radiologists, with majority vote adjudication by board-certified radiologists. RICORD consists of 240 thoracic CT scans and 1000 chest radiographs contributed from four international sites. It is anticipated that RICORD will ideally lead to prediction models that can demonstrate sustained performance across populations and health care systems.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1148/radiol.2021203957DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7993245PMC
April 2021

Vascular dysfunction promotes regional hypoxia after bevacizumab therapy in recurrent glioblastoma patients.

Neurooncol Adv 2020 Jan-Dec;2(1):vdaa157. Epub 2020 Nov 17.

Department of Neurology, Brigham and Women's Hospital, Boston, Massachusetts, USA.

Background: Hypoxia is a driver of treatment resistance in glioblastoma. Antiangiogenic agents may transiently normalize blood vessels and decrease hypoxia before excessive pruning of vessels increases hypoxia. The time window of normalization is dose and time dependent. We sought to determine how VEGF blockade with bevacizumab modulates tumor vasculature and the impact that those vascular changes have on hypoxia in recurrent glioblastoma patients.

Methods: We measured tumor volume, vascular permeability (Ktrans), perfusion parameters (cerebral blood flow/volume, vessel caliber, and mean transit time), and regions of hypoxia in patients with recurrent glioblastoma before and after treatment with bevacizumab alone or with lomustine using [F]FMISO PET-MRI. We also examined serial changes in plasma biomarkers of angiogenesis and inflammation.

Results: Eleven patients were studied. The magnitude of global tumor hypoxia was variable across these 11 patients prior to treatment and it did not significantly change after bevacizumab. The hypoxic regions had an inefficient vasculature characterized by elevated cerebral blood flow/volume and increased vessel caliber. In a subset of patients, there were tumor subregions with decreased mean transit times and a decrease in hypoxia, suggesting heterogeneous improvement in vascular efficiency. Bevacizumab significantly changed known pharmacodynamic biomarkers such as plasma VEGF and PlGF.

Conclusions: The vascular signature in hypoxic tumor regions indicates a disorganized vasculature which, in most tumors, does not significantly change after bevacizumab treatment. While some tumor regions showed improved vascular efficiency following treatment, bevacizumab did not globally alter hypoxia or normalize tumor vasculature in glioblastoma.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1093/noajnl/vdaa157DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7764510PMC
November 2020
-->