Publications by authors named "M G Belyaev"

8 Publications

Challenges in Building of Deep Learning Models for Glioblastoma Segmentation: Evidence from Clinical Data.

Stud Health Technol Inform 2021 May;281:298-302

N.N. Burdenko National Medical Research Center of Neurosurgery, Moscow, Russia.

In this article, we compare the performance of a state-of-the-art segmentation network (UNet) on two different glioblastoma (GB) segmentation datasets. Our experiments show that the same training procedure yields results almost twice as poor, in terms of Dice score, on retrospective clinical data as on the BraTS challenge data. We discuss possible reasons for this outcome, including inter-rater variability and the high variability of magnetic resonance imaging (MRI) scanners and scanner settings. The high performance of segmentation models demonstrated on preselected imaging data does not bring the community closer to using these algorithms in clinical settings. We believe that a clinically applicable deep learning architecture requires a shift from unified datasets to heterogeneous data.
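Since the comparison above is made in terms of the Dice score, a minimal sketch of that metric may be useful; the function below is a generic NumPy implementation with illustrative array shapes, not code from the paper.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Illustrative usage with random masks of a BraTS-like volume shape.
rng = np.random.default_rng(0)
pred = rng.random((155, 240, 240)) > 0.5
target = rng.random((155, 240, 240)) > 0.5
print(f"Dice: {dice_score(pred, target):.3f}")
```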
Source
http://dx.doi.org/10.3233/SHTI210168

CT-Based COVID-19 triage: Deep multitask learning improves joint identification and severity quantification.

Med Image Anal 2021 Jul 1;71:102054. Epub 2021 Apr 1.

Skolkovo Institute of Science and Technology, Moscow, Russia.

The current COVID-19 pandemic overloads healthcare systems, including radiology departments. Though several deep learning approaches have been developed to assist in CT analysis, none of them addresses study triage directly as a computer science problem. We describe two basic setups: identification of COVID-19, to prioritize studies of potentially infected patients so that they can be isolated as early as possible; and severity quantification, to highlight patients with severe COVID-19 and thus direct them to a hospital or provide emergency medical care. We formalize these tasks as binary classification and estimation of the affected lung percentage. Though similar problems have been well studied separately, we show that existing methods provide reasonable quality for only one of these setups. We employ a multitask approach to consolidate both triage setups and propose a convolutional neural network that leverages all available labels within a single model. In contrast with related multitask approaches, we show the benefit of applying the classification layers to the most spatially detailed feature map at the upper part of U-Net instead of the less detailed latent representation at the bottom. We train our model on approximately 1,500 publicly available CT studies and test it on a holdout dataset of 123 chest CT studies of patients drawn from the same healthcare system: 32 COVID-19 cases, 30 bacterial pneumonia cases, 30 cases with cancerous nodules, and 31 healthy controls. The proposed multitask model outperforms the other approaches, achieving ROC AUC scores of 0.87±0.01 vs. bacterial pneumonia, 0.93±0.01 vs. cancerous nodules, and 0.97±0.01 vs. healthy controls for identification of COVID-19, and a Spearman correlation of 0.97±0.01 for severity quantification. We have released our code and shared the annotated lesion masks for 32 CT images of patients with COVID-19 from the test dataset.
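To make the design choice highlighted above concrete, here is a minimal PyTorch sketch of a multitask U-Net-like network whose classification head pools the most detailed decoder feature map rather than the low-resolution bottleneck. The architecture, layer sizes, and names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class MultitaskUNet(nn.Module):
    def __init__(self, in_ch: int = 1, base: int = 16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.dec2 = conv_block(base * 4 + base * 2, base * 2)
        self.dec1 = conv_block(base * 2 + base, base)
        # Segmentation head: per-pixel lesion probability (affected lung tissue).
        self.seg_head = nn.Conv2d(base, 1, kernel_size=1)
        # Classification head attached to the *top* (most detailed) decoder map.
        self.cls_head = nn.Linear(base, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        b = self.bottleneck(F.max_pool2d(e2, 2))
        d2 = self.dec2(torch.cat([F.interpolate(b, scale_factor=2), e2], dim=1))
        d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2), e1], dim=1))
        seg_logits = self.seg_head(d1)            # lesion segmentation output
        pooled = d1.mean(dim=(2, 3))              # global average pooling of top map
        cls_logit = self.cls_head(pooled)         # study-level classification logit
        return seg_logits, cls_logit


# Illustrative usage on a single 2D slice.
model = MultitaskUNet()
seg, cls = model(torch.randn(1, 1, 128, 128))
print(seg.shape, cls.shape)  # torch.Size([1, 1, 128, 128]) torch.Size([1, 1])
```

Both heads share one backbone, so a single model can be trained with whatever segmentation or classification labels are available for each study.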
Source
http://dx.doi.org/10.1016/j.media.2021.102054
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8015379

[Artificial intelligence for diagnosis of vertebral compression fractures using a morphometric analysis model, based on convolutional neural networks].

Probl Endokrinol (Mosk) 2020 Oct 24;66(5):48-60. Epub 2020 Oct 24.

Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies.

Background: Pathological low-energy (LE) vertebral compression fractures (VFs) are common complications of osteoporosis and predictors of subsequent LE fractures. In 84% of cases, VFs are not reported on chest CT (CCT), which calls for the development of an artificial intelligence-based (AI) assistant that would help radiology specialists to improve the diagnosis of osteoporosis complications and prevent new LE fractures.

Aims: To develop an AI model for automated diagnosis of compression fractures of the thoracic spine based on chest CT images.

Materials And Methods: Between September 2019 and May 2020, the authors performed a retrospective sampling study of CCT images. A total of 160 studies were selected and anonymized. The data were labeled by seven readers. Using morphometric analysis, the investigators obtained the ventral, medial, and dorsal dimensions of the vertebral bodies. This was followed by a semiquantitative assessment of VF grade. These data were used to develop the Comprise-G AI model based on a convolutional neural network (CNN), which measures the size of the vertebral bodies and then calculates the degree of compression. The model was evaluated with ROC curve analysis and by calculating sensitivity and specificity values.

Results: The dataset comprised 160 patients (training group: 100 patients; test group: 60 patients), with a total of 2,066 annotated vertebrae. For detecting patients whose most severe VF was Grade 2 or 3, the Comprise-G model demonstrated sensitivity of 90.7%, specificity of 90.7%, and ROC AUC of 0.974 on 5-fold cross-validation of the training dataset, and sensitivity of 83.2%, specificity of 90.0%, and ROC AUC of 0.956 on the test data. At the level of individual vertebrae, the model demonstrated sensitivity of 91.5%, specificity of 95.2%, and ROC AUC of 0.981 on cross-validation, and sensitivity of 79.3%, specificity of 98.7%, and ROC AUC of 0.978 on the test data.

Conclusions: The Comprise-G model demonstrated high diagnostic capabilities in detecting VFs on CCT images and can be recommended for further validation.
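As a rough illustration of the semiquantitative grading step described in the methods, the sketch below converts ventral, medial, and dorsal vertebral body heights into a compression grade using the standard Genant thresholds (20-25% loss for Grade 1, 25-40% for Grade 2, >40% for Grade 3). Taking the largest of the three measurements as the reference height is a simplifying assumption; this is a generic reconstruction, not the Comprise-G implementation.

```python
def compression_grade(ventral: float, medial: float, dorsal: float) -> int:
    """Semiquantitative (Genant-style) grade from three vertebral body heights (mm).

    Grade 0: <20% height loss, 1: 20-25%, 2: 25-40%, 3: >40%.
    The reference height is taken as the largest of the three measurements
    (a simplification of standard morphometric practice).
    """
    heights = [ventral, medial, dorsal]
    loss = 1.0 - min(heights) / max(heights)  # relative height loss
    if loss > 0.40:
        return 3
    if loss > 0.25:
        return 2
    if loss >= 0.20:
        return 1
    return 0


# Illustrative example: medial height reduced from 24 mm to 15 mm (~37% loss).
print(compression_grade(ventral=24.0, medial=15.0, dorsal=23.5))  # -> 2
```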
Source
http://dx.doi.org/10.14341/probl12605

[Epicardial fat Tissue Volumetry: Comparison of Semi-Automatic Measurement and the Machine Learning Algorithm].

Kardiologiia 2020 Oct 14;60(9):46-54. Epub 2020 Oct 14.

Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Moscow.

Aim: To compare assessments of epicardial adipose tissue (EAT) volume obtained with a semi-automatic, physician-performed analysis and with an automatic analysis using a machine-learning algorithm, based on low-dose (LDCT) and standard chest computed tomography (CT) data.

Material and methods: This analytical, retrospective, cross-sectional study randomly included 100 patients from the database of a unified radiological information service (URIS). The patients underwent LDCT as part of the project "Low-dose chest computed tomography as a screening method for detection of lung cancer and other diseases of chest organs" (n=50) or chest CT according to a standard protocol (n=50) in outpatient clinics of Moscow. Each image was read by two radiologists on a Syngo.via VB20 workstation. In addition, each image was evaluated with the developed machine-learning algorithm, which provides a fully automatic measurement of EAT.

Results: Comparison of EAT volumes obtained with chest LDCT and CT showed highly consistent results both between the expert-performed semi-automatic analyses (correlation coefficient >98%) and between the expert annotations and the machine-learning algorithm (correlation coefficient >95%). Segmentation and volumetry of one image with the machine-learning algorithm took no longer than 40 s, which is 30 times faster than the quantitative analysis performed by an expert and potentially facilitates quantification of EAT volume in clinical practice.

Conclusion: The proposed method of automatic volumetry will expedite the analysis of EAT for predicting the risk of ischemic heart disease.
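For context on the comparison above, here is a minimal sketch of how a segmentation-based volume measurement and an inter-method correlation could be computed. The array names, voxel spacing, volume values, and the use of Pearson correlation are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def volume_ml(mask: np.ndarray, spacing_mm: tuple[float, float, float]) -> float:
    """Volume of a binary segmentation mask in millilitres (1 ml = 1000 mm^3)."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

# Agreement between two series of volume measurements (e.g. expert vs. algorithm).
expert = np.array([98.5, 120.3, 75.1, 140.8, 110.2])      # ml, illustrative values
algorithm = np.array([97.9, 122.0, 74.4, 138.9, 112.0])   # ml, illustrative values
r = np.corrcoef(expert, algorithm)[0, 1]
print(f"Pearson correlation: {r:.3f}")
```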
Source
http://dx.doi.org/10.18087/cardio.2020.9.n1111

Standardized Assessment of Automatic Segmentation of White Matter Hyperintensities and Results of the WMH Segmentation Challenge.

IEEE Trans Med Imaging 2019 Nov 19;38(11):2556-2568. Epub 2019 Mar 19.

Quantification of cerebral white matter hyperintensities (WMH) of presumed vascular origin is of key importance in many neurological research studies. Currently, measurements are often still obtained from manual segmentations on brain MR images, which is a laborious procedure. Automatic WMH segmentation methods exist, but a standardized comparison of their performance is lacking. We organized a scientific challenge, the WMH Segmentation Challenge, in which developers could evaluate their methods on a standardized multi-center/-scanner image dataset, giving an objective comparison. Sixty T1 + FLAIR images from three MR scanners were released with manual WMH segmentations for training. A test set of 110 images from five MR scanners was used for evaluation. The segmentation methods had to be containerized and submitted to the challenge organizers. Five evaluation metrics were used to rank the methods: 1) Dice similarity coefficient; 2) modified Hausdorff distance (95th percentile); 3) absolute log-transformed volume difference; 4) sensitivity for detecting individual lesions; and 5) F1-score for individual lesions. In addition, the methods were ranked on their inter-scanner robustness. Twenty participants submitted their methods for evaluation. This paper provides a detailed analysis of the results. In brief, there is a cluster of four methods that rank significantly better than the others, with one clear winner. The inter-scanner robustness ranking shows that not all methods generalize to unseen scanners. The challenge remains open for future submissions and provides a public platform for method evaluation.
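For reference, below is a simplified sketch of how several of the volume- and lesion-level metrics named above could be computed with NumPy/SciPy. It is an illustrative reconstruction under simplifying assumptions (any one-voxel overlap counts as a lesion detection), not the official challenge evaluation code.

```python
import numpy as np
from scipy import ndimage

def dice(pred, truth):
    """Dice similarity coefficient of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

def abs_log_volume_difference(pred, truth):
    """Absolute difference of log-transformed lesion volumes (in voxels)."""
    return abs(np.log(pred.sum() + 1) - np.log(truth.sum() + 1))

def lesion_sensitivity_f1(pred, truth):
    """Lesion-wise sensitivity and F1 via connected-component overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    truth_labels, n_truth = ndimage.label(truth)
    pred_labels, n_pred = ndimage.label(pred)
    # A ground-truth lesion counts as detected if any predicted voxel overlaps it.
    detected = sum(1 for i in range(1, n_truth + 1) if pred[truth_labels == i].any())
    # A predicted lesion counts as a true positive if it overlaps the ground truth.
    tp_pred = sum(1 for j in range(1, n_pred + 1) if truth[pred_labels == j].any())
    sensitivity = detected / n_truth if n_truth else 1.0
    precision = tp_pred / n_pred if n_pred else 1.0
    total = precision + sensitivity
    f1 = 2 * precision * sensitivity / total if total else 0.0
    return sensitivity, f1

# Illustrative usage on small random masks.
rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)) > 0.995
pred = rng.random((64, 64, 64)) > 0.995
print(dice(pred, truth), abs_log_volume_difference(pred, truth),
      lesion_sensitivity_f1(pred, truth))
```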
Source
http://dx.doi.org/10.1109/TMI.2019.2905770
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7590957