Publications by authors named "Johannes Ulén"

16 Publications


Convolutional neural network-based automatic heart segmentation and quantitation in ¹²³I-metaiodobenzylguanidine SPECT imaging.

EJNMMI Res 2021 Oct 12;11(1):105. Epub 2021 Oct 12.

Department of Nuclear Medicine, Kanazawa University, Kanazawa, Japan.

Background: Since three-dimensional segmentation of the cardiac region in ¹²³I-metaiodobenzylguanidine (MIBG) studies has not been established, this study aimed to achieve organ segmentation using a convolutional neural network (CNN) with ¹²³I-MIBG single photon emission computed tomography (SPECT) imaging, to calculate heart counts and washout rates (WR) automatically, and to compare them with conventional quantitation based on planar imaging.

Methods: We assessed 48 patients (aged 68.4 ± 11.7 years) with heart and neurological diseases, including chronic heart failure, dementia with Lewy bodies, and Parkinson's disease. All patients were assessed by early and late ¹²³I-MIBG planar and SPECT imaging. The CNN was initially trained to individually segment the lungs and liver on early and late SPECT images. The segmentation masks were aligned, and then the CNN was trained to directly segment the heart; all models were evaluated using fourfold cross-validation. The CNN-based average heart counts and WR were calculated and compared with those determined using planar parameters. The CNN-based SPECT and conventional planar heart counts were corrected for physical time decay, injected dose of ¹²³I-MIBG, and body weight. We also divided WR into normal and abnormal groups based on linear regression lines determined by the relationship between planar WR and CNN-based WR, and then analyzed the agreement between them.

Results: The CNN segmented the cardiac region in patients with both normal and reduced uptake. The CNN-based SPECT heart counts correlated significantly with conventional planar heart counts with and without background correction and with the planar heart-to-mediastinum ratio (R = 0.862, 0.827, and 0.729, respectively; p < 0.0001). The CNN-based WR also correlated with planar WR with and without background correction and with WR based on heart-to-mediastinum ratios (R = 0.584, 0.568, and 0.507, respectively; p < 0.0001). Contingency table findings of high and low WR (cutoffs: 34% and 30% for planar and SPECT studies, respectively) showed 87.2% agreement between the CNN-based and planar methods.

Conclusions: The CNN could create segmentation from SPECT images, and average heart counts and WR were reliably calculated three-dimensionally, which might be a novel approach to quantifying SPECT images of innervation.
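The conventional washout-rate computation that the CNN-based values are compared against can be sketched as follows. This is a minimal illustration, not the study's code: the function names are mine, the ¹²³I half-life of roughly 13.2 h is a standard physical constant, and the study's additional normalisation by injected dose and body weight is omitted here.

```python
import math

I123_HALF_LIFE_H = 13.2  # approximate physical half-life of 123-I in hours

def decay_corrected(late_counts, delay_h, half_life_h=I123_HALF_LIFE_H):
    """Correct late-image counts for physical decay back to the early time point."""
    return late_counts * 2.0 ** (delay_h / half_life_h)

def washout_rate(early_counts, late_counts, delay_h):
    """Washout rate (%) from early and late heart counts,
    with the late counts corrected for physical decay."""
    late_dc = decay_corrected(late_counts, delay_h)
    return (early_counts - late_dc) / early_counts * 100.0

# With zero delay no decay correction is applied:
print(washout_rate(100.0, 70.0, 0.0))  # 30.0
```

With a realistic early-to-late delay, the decay correction raises the late counts and therefore lowers the computed washout.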
http://dx.doi.org/10.1186/s13550-021-00847-x

Deep learning takes the pain out of back breaking work - Automatic vertebral segmentation and attenuation measurement for osteoporosis.

Clin Imaging 2021 Aug 26;81:54-59. Epub 2021 Aug 26.

Göteborg University, SU Sahlgrenska, 413 45 Göteborg, Sweden.

Background: Osteoporosis is an underdiagnosed and undertreated disease worldwide. Recent studies have highlighted the use of simple vertebral trabecular attenuation values for opportunistic osteoporosis screening. Meanwhile, machine learning has been used to accurately segment large parts of the human skeleton.

Purpose: To evaluate a fully automated deep learning-based method for lumbar vertebral segmentation and measurement of vertebral volumetric trabecular attenuation values.

Material And Methods: A deep learning-based method for automated segmentation of bones was retrospectively applied to non-contrast CT scans of 1008 patients (mean age 57 years, 472 female, 536 male). Each vertebral segmentation was automatically reduced by 7 mm in all directions in order to avoid cortical bone. The mean and median volumetric attenuation values from Th12 to L4 were obtained and plotted against patient age and sex. L1 values were further analyzed to facilitate comparison with previous studies.

Results: The mean L1 attenuation values decreased linearly with age, with a slope of −2.2 HU per year (age > 30; 95% CI: −2.4 to −2.0; R = 0.3544). The mean L1 attenuation value of the entire population cohort was 140 ± 54 HU.

Conclusions: With results closely matching those of previous studies, we believe that our fully automated deep learning-based method can be used to obtain lumbar volumetric trabecular attenuation values which can be used for opportunistic screening of osteoporosis in patients undergoing CT scans for other reasons.
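The 7 mm inward reduction of each vertebral segmentation can be sketched as a plain morphological erosion followed by a mean-HU readout. A sketch only: the function names are mine, isotropic voxels are assumed, and the paper does not specify its erosion implementation.

```python
import numpy as np

def erode_once(mask):
    """One step of 6-connected binary erosion; out-of-bounds counts as background."""
    p = np.pad(mask, 1, constant_values=False)
    out = p[1:-1, 1:-1, 1:-1].copy()
    out &= p[:-2, 1:-1, 1:-1] & p[2:, 1:-1, 1:-1]
    out &= p[1:-1, :-2, 1:-1] & p[1:-1, 2:, 1:-1]
    out &= p[1:-1, 1:-1, :-2] & p[1:-1, 1:-1, 2:]
    return out

def trabecular_mean_hu(ct_hu, vertebra_mask, margin_mm, voxel_mm):
    """Mean attenuation (HU) inside the mask eroded by ~margin_mm,
    so that cortical bone near the vertebral surface is excluded."""
    core = vertebra_mask.copy()
    for _ in range(int(round(margin_mm / voxel_mm))):
        core = erode_once(core)
    return float(ct_hu[core].mean())
```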
http://dx.doi.org/10.1016/j.clinimag.2021.08.009

Artificial intelligence-based measurements of PET/CT imaging biomarkers are associated with disease-specific survival of high-risk prostate cancer patients.

Scand J Urol 2021 Sep 25:1-7. Epub 2021 Sep 25.

Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden.

Objective: Artificial intelligence (AI) offers new opportunities for objective quantitative measurements of imaging biomarkers from positron-emission tomography/computed tomography (PET/CT). Clinical image reporting relies predominantly on observer-dependent visual assessment and easily accessible measures like SUV, representing lesion uptake in a relatively small amount of tissue. Our hypothesis is that measurements of the total volume and lesion uptake of the entire tumour would better reflect the disease's activity, with prognostic significance, compared with conventional measurements.

Methods: An AI-based algorithm was trained to automatically measure the prostate and its tumour content in PET/CT of 145 patients. The algorithm was then tested retrospectively on 285 high-risk patients, who were examined using ¹⁸F-choline PET/CT for primary staging between April 2008 and July 2015. Prostate tumour volume, tumour fraction of the prostate gland, lesion uptake of the entire tumour, and SUV were obtained automatically. Associations between these measurements, age, PSA, Gleason score and prostate cancer-specific survival were studied using a Cox proportional-hazards regression model.

Results: Twenty-three patients died of prostate cancer during follow-up (median survival 3.8 years). Total tumour volume of the prostate (p = 0.008), tumour fraction of the gland (p = 0.005), total lesion uptake of the prostate (p = 0.02), and age (p = 0.01) were significantly associated with disease-specific survival, whereas SUV (p = 0.2), PSA (p = 0.2), and Gleason score (p = 0.8) were not.

Conclusion: AI-based assessments of total tumour volume and lesion uptake were significantly associated with disease-specific survival in this patient cohort, whereas SUV and Gleason scores were not. The AI-based approach appears well-suited for clinically relevant patient stratification and monitoring of individual therapy.
http://dx.doi.org/10.1080/21681805.2021.1977845

Artificial intelligence could alert for focal skeleton/bone marrow uptake in Hodgkin's lymphoma patients staged with FDG-PET/CT.

Sci Rep 2021 May 17;11(1):10382. Epub 2021 May 17.

Clinical Physiology and Nuclear Medicine, Skåne University Hospital, Malmö, Sweden.

To develop an artificial intelligence (AI)-based method for the detection of focal skeleton/bone marrow uptake (BMU) in patients with Hodgkin's lymphoma (HL) undergoing staging with FDG-PET/CT. The results of the AI in a separate test group were compared to the interpretations of independent physicians. The skeleton and bone marrow were segmented using a convolutional neural network. The training of the AI was based on 153 untreated patients. Bone uptake significantly higher than the mean BMU was marked as abnormal, and an index based on the total squared abnormal uptake was computed to identify focal uptake. Patients with an index above a predefined threshold were interpreted as having focal uptake. As the test group, 48 untreated patients with biopsy-proven HL who had undergone a staging FDG-PET/CT between 2017 and 2018 were retrospectively included. Ten physicians classified the 48 cases regarding focal skeleton/BMU. The majority of the physicians agreed with the AI in 39/48 cases (81%) regarding focal skeleton/bone marrow involvement. Inter-observer agreement between the physicians was moderate (Kappa 0.51, range 0.25-0.80). An AI-based method can be developed to highlight suspicious focal skeleton/BMU in HL patients staged with FDG-PET/CT. Inter-observer agreement regarding focal BMU is moderate among nuclear medicine physicians.
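The abnormality index described above (total squared uptake above a threshold derived from the mean BMU) can be sketched as follows. The z-score-style threshold with `n_std`, and all names, are illustrative assumptions; the abstract states only that uptake significantly higher than the mean BMU was marked as abnormal.

```python
import numpy as np

def focal_uptake_index(suv_bone, n_std=2.0):
    """Index of focal skeletal uptake: sum of squared excess SUV above a
    threshold of mean + n_std * std of the skeletal/bone-marrow uptake."""
    suv_bone = np.asarray(suv_bone, dtype=float)
    threshold = suv_bone.mean() + n_std * suv_bone.std()
    excess = np.clip(suv_bone - threshold, 0.0, None)
    return float(np.sum(excess ** 2))

def has_focal_uptake(suv_bone, index_threshold, n_std=2.0):
    """Binary call: focal uptake if the index exceeds a predefined threshold."""
    return focal_uptake_index(suv_bone, n_std) > index_threshold
```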
http://dx.doi.org/10.1038/s41598-021-89656-9
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8128858

AI-based detection of lung lesions in [¹⁸F]FDG PET-CT from lung cancer patients.

EJNMMI Phys 2021 Mar 25;8(1):32. Epub 2021 Mar 25.

Department of Clinical Physiology, Sahlgrenska University Hospital, Gothenburg, Sweden.

Background: [¹⁸F]fluorodeoxyglucose (FDG) positron emission tomography with computed tomography (PET-CT) is a well-established modality in the work-up of patients with suspected or confirmed diagnosis of lung cancer. Recent research efforts have focused on extracting theragnostic and textural information from manually indicated lung lesions. Both semi-automatic and fully automatic use of artificial intelligence (AI) to localise and classify FDG-avid foci has been demonstrated. To fully harness AI's usefulness, we have developed a method which both automatically detects abnormal lung lesions and calculates the total lesion glycolysis (TLG) on FDG PET-CT.

Methods: One hundred twelve patients (59 females and 53 males) who underwent FDG PET-CT due to suspected lung cancer or for the management of known lung cancer were studied retrospectively. These patients were divided into a training group (59%; n = 66), a validation group (20.5%; n = 23) and a test group (20.5%; n = 23). A nuclear medicine physician manually segmented abnormal lung lesions with increased FDG uptake in all PET-CT studies. The AI-based method was trained to segment the lesions based on the manual segmentations. TLG was then calculated from the manual and AI-based measurements, respectively, and analysed with Bland-Altman plots.

Results: The AI tool detected lesions with a sensitivity of 90%. One small lesion was missed in each of two patients; both patients also had a larger lesion that was correctly detected. The positive and negative predictive values were 88% and 100%, respectively. The correlation between manual and AI TLG measurements was strong (R = 0.74). Bias was 42 g, and the 95% limits of agreement ranged from −736 to 819 g. Agreement was particularly high in smaller lesions.

Conclusions: The AI-based method is suitable for the detection of lung lesions and automatic calculation of TLG in small- to medium-sized tumours. In a clinical setting, it will have an added value due to its capability to sort out negative examinations resulting in prioritised and focused care on patients with potentially malignant lesions.
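The Bland-Altman comparison used for the TLG measurements follows the standard bias and 1.96 SD limits-of-agreement formulation; a minimal implementation (the function name is mine):

```python
import numpy as np

def bland_altman(manual, automated):
    """Bias and 95% limits of agreement between paired measurements."""
    manual = np.asarray(manual, dtype=float)
    automated = np.asarray(automated, dtype=float)
    diff = automated - manual
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))          # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```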
http://dx.doi.org/10.1186/s40658-021-00376-5
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7994489

Artificial intelligence-aided CT segmentation for body composition analysis: a validation study.

Eur Radiol Exp 2021 Mar 11;5(1):11. Epub 2021 Mar 11.

Region Västra Götaland, Department of Clinical Physiology, Sahlgrenska University Hospital, Gothenburg, Sweden.

Background: Body composition is associated with survival outcome in oncological patients, but it is not routinely calculated. Manual segmentation of subcutaneous adipose tissue (SAT) and muscle is time-consuming and therefore limited to a single CT slice. Our goal was to develop an artificial-intelligence (AI)-based method for automated quantification of three-dimensional SAT and muscle volumes from CT images.

Methods: Ethical approvals from Gothenburg and Lund Universities were obtained. Convolutional neural networks were trained to segment SAT and muscle using manual segmentations on CT images from a training group of 50 patients. The method was applied to a separate test group of 74 cancer patients, who had two CT studies each with a median interval between the studies of 3 days. Manual segmentations in a single CT slice were used for comparison. The accuracy was measured as overlap between the automated and manual segmentations.

Results: The accuracy of the AI method was 0.96 for SAT and 0.94 for muscle. The average differences in volumes were significantly lower than the corresponding differences in areas in a single CT slice: 1.8% versus 5.0% (p < 0.001) for SAT and 1.9% versus 3.9% (p < 0.001) for muscle. The 95% confidence intervals for volumes predicted in an individual subject from the corresponding single-slice areas were on the order of ±20%.

Conclusions: The AI-based tool for quantification of SAT and muscle volumes showed high accuracy and reproducibility and provided a body composition analysis that is more relevant than manual analysis of a single CT slice.
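The overlap accuracy reported here is a Dice-style measure; a minimal implementation of the Sørensen-Dice index (names mine):

```python
import numpy as np

def dice_index(mask_a, mask_b):
    """Sørensen-Dice overlap between two binary segmentations:
    2 |A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks treated here as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```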
http://dx.doi.org/10.1186/s41747-021-00210-8
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7947128

Auto-segmentations by convolutional neural network in cervical and anorectal cancer with clinical structure sets as the ground truth.

Clin Transl Radiat Oncol 2020 Nov 14;25:37-45. Epub 2020 Sep 14.

Wallenberg Centre for Molecular Medicine, Lund University, Lund, Sweden.

Background: It is time-consuming for oncologists to delineate volumes for radiotherapy treatment in computed tomography (CT) images. Automatic delineation based on image processing exists, but with varied accuracy and moderate time savings. Using a convolutional neural network (CNN), volumes can be delineated faster and more accurately. We have used CTs with the annotated structure sets to train and evaluate a CNN.

Material And Methods: The CNN is a standard segmentation network modified to minimize memory usage. We used CTs and structure sets from 75 cervical cancers and 191 anorectal cancers receiving radiation therapy at Skåne University Hospital 2014-2018. Five structures were investigated: left/right femoral heads, bladder, bowel bag, and the clinical target volume of lymph nodes (CTVNs). Accuracy was evaluated using the Dice score and mean surface distance (MSD, mm), and one oncologist qualitatively evaluated the auto-segmentations.

Results: Median Dice/MSD scores for anorectal cancer were 0.91-0.92/1.93-1.86 for the femoral heads, 0.94/2.07 for the bladder, and 0.83/6.80 for the bowel bag. Median Dice/MSD scores for cervical cancer were 0.93-0.94/1.42-1.49 for the femoral heads, 0.84/3.51 for the bladder, 0.88/5.80 for the bowel bag, and 0.82/3.89 for the CTVNs. In the qualitative evaluation, the femoral head and bladder auto-segmentations were mostly excellent, but a larger proportion of the CTVN auto-segmentations was not acceptable.

Discussion: It is possible to train a CNN with high overlap using structure sets as ground truth. Manually delineated pelvic volumes from structure sets do not always strictly follow volume boundaries and are sometimes inaccurately defined, which leads to similar inaccuracies in the CNN output. More data that is consistently annotated is needed to achieve higher CNN accuracy and to enable future clinical implementation.
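The mean surface distance (MSD) used above can be sketched by extracting surface voxels and averaging nearest-neighbour distances in both directions. A brute-force sketch suitable only for small masks; the function names and the 6-connected surface definition are my assumptions:

```python
import numpy as np

def interior(mask):
    """Voxels whose entire 6-neighbourhood lies inside the mask."""
    p = np.pad(mask, 1, constant_values=False)
    core = p[1:-1, 1:-1, 1:-1].copy()
    for axis in range(3):
        for step in (-1, 1):
            core &= np.roll(p, step, axis=axis)[1:-1, 1:-1, 1:-1]
    return core

def mean_surface_distance(mask_a, mask_b, voxel_mm=1.0):
    """Symmetric mean surface distance (mm) between two binary segmentations,
    computed brute force over surface-voxel coordinates."""
    sa = np.argwhere(mask_a & ~interior(mask_a)).astype(float)
    sb = np.argwhere(mask_b & ~interior(mask_b)).astype(float)
    d = np.linalg.norm(sa[:, None, :] - sb[None, :, :], axis=2) * voxel_mm
    return 0.5 * (float(d.min(axis=1).mean()) + float(d.min(axis=0).mean()))
```

Production implementations typically use distance transforms instead of the quadratic pairwise computation shown here.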
http://dx.doi.org/10.1016/j.ctro.2020.09.004
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7519211

Artificial intelligence-based detection of lymph node metastases by PET/CT predicts prostate cancer-specific survival.

Clin Physiol Funct Imaging 2021 Jan 18;41(1):62-67. Epub 2020 Oct 18.

Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden.

Introduction: Lymph node metastases are a key prognostic factor in prostate cancer (PCa), but detecting lymph node lesions from PET/CT images is a subjective process resulting in inter-reader variability. Artificial intelligence (AI)-based methods can provide an objective image analysis. We aimed at developing and validating an AI-based tool for detection of lymph node lesions.

Methods: A group of 399 patients with biopsy-proven PCa who had undergone ¹⁸F-choline PET/CT for staging prior to treatment were used to train (n = 319) and test (n = 80) the AI-based tool. The tool consisted of convolutional neural networks using complete PET/CT scans as inputs. In the test set, the AI-based lymph node detections were compared to those of two independent readers. The association with PCa-specific survival was investigated.

Results: The AI-based tool detected more lymph node lesions than Reader B (98 vs. 87 of 117; p = .045) using Reader A as reference. The AI-based tool and Reader A showed similar performance (90 vs. 87 of 111; p = .63) using Reader B as reference. The number of lymph node lesions detected by the AI-based tool, PSA, and curative treatment were significantly associated with PCa-specific survival.

Conclusion: This study shows the feasibility of using an AI-based tool for automated and objective interpretation of PET/CT images, providing assessments of lymph node lesions comparable with those of experienced readers as well as prognostic information in PCa patients.
http://dx.doi.org/10.1111/cpf.12666

RECOMIA: a cloud-based platform for artificial intelligence research in nuclear medicine and radiology.

EJNMMI Phys 2020 Aug 4;7(1):51. Epub 2020 Aug 4.

Department of Clinical Physiology, Sahlgrenska University Hospital, Gothenburg, Sweden.

Background: Artificial intelligence (AI) is about to transform medical imaging. The Research Consortium for Medical Image Analysis (RECOMIA), a not-for-profit organisation, has developed an online platform to facilitate collaboration between medical researchers and AI researchers. The aim is to minimise the time and effort researchers need to spend on technical aspects, such as transfer, display, and annotation of images, as well as legal aspects, such as de-identification. The purpose of this article is to present the RECOMIA platform and its AI-based tools for organ segmentation in computed tomography (CT), which can be used for extraction of standardised uptake values from the corresponding positron emission tomography (PET) image.

Results: The RECOMIA platform includes modules for (1) local de-identification of medical images, (2) secure transfer of images to the cloud-based platform, (3) display functions available using a standard web browser, (4) tools for manual annotation of organs or pathology in the images, (5) deep learning-based tools for organ segmentation or other customised analyses, (6) tools for quantification of segmented volumes, and (7) an export function for the quantitative results. The AI-based tool for organ segmentation in CT currently handles 100 organs (77 bones and 23 soft tissue organs). The segmentation is based on two convolutional neural networks (CNNs): one network to handle organs with multiple similar instances, such as vertebrae and ribs, and one network for all other organs. The CNNs have been trained using CT studies from 339 patients. Experienced radiologists annotated organs in the CT studies. The performance of the segmentation tool, measured as mean Dice index on a manually annotated test set with 10 representative organs, was 0.93 for all foreground voxels, and the mean Dice index over the organs was 0.86 (0.82 for the soft tissue organs and 0.90 for the bones).

Conclusion: The paper presents a platform providing deep learning-based tools that can perform basic organ segmentation in CT, which can then be used to automatically obtain different measurements in the corresponding PET image. The RECOMIA platform is available on request at www.recomia.org for research purposes.
http://dx.doi.org/10.1186/s40658-020-00316-9
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7403290

Deep learning-based quantification of PET/CT prostate gland uptake: association with overall survival.

Clin Physiol Funct Imaging 2020 Mar 20;40(2):106-113. Epub 2019 Dec 20.

Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden.

Aim: To validate a deep-learning (DL) algorithm for automated quantification of prostate cancer on positron emission tomography/computed tomography (PET/CT) and explore the potential of PET/CT measurements as prognostic biomarkers.

Material And Methods: Training of the DL algorithm regarding prostate volume was performed on manually segmented CT images in 100 patients. Validation of the DL algorithm was carried out in 45 patients with biopsy-proven hormone-naïve prostate cancer. The automated measurements of prostate volume were compared with manual measurements made independently by two observers. PET/CT measurements of tumour burden based on the volume and SUV of abnormal voxels were calculated automatically. Voxels in the co-registered ¹⁸F-choline PET images above a standardized uptake value (SUV) of 2.65, and corresponding to the prostate as defined by the automated segmentation in the CT images, were defined as abnormal. Validation of abnormal voxels was performed by manual segmentation of radiotracer uptake. Agreement between the algorithm and the observers regarding prostate volume was analysed using the Sørensen-Dice index (SDI). Associations between the automatically derived PET/CT biomarkers and age, prostate-specific antigen (PSA), Gleason score, as well as overall survival, were evaluated using a univariate Cox regression model.

Results: The SDI between the automated and the manual volume segmentations was 0.78 and 0.79 for the two observers, respectively. Automated PET/CT measures reflecting total lesion uptake and the relation between the volume of abnormal voxels and total prostate volume were significantly associated with overall survival (P = 0.02), whereas age, PSA, and Gleason score were not.

Conclusion: Automated PET/CT biomarkers showed good agreement to manual measurements and were significantly associated with overall survival.
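The automated tumour-burden measures described in the methods (abnormal voxels above SUV 2.65 inside the CT-derived prostate) can be sketched as below. The names and dictionary layout are mine, and the total-lesion-uptake product (mean abnormal SUV times abnormal volume) is an assumption consistent with the text:

```python
import numpy as np

SUV_THRESHOLD = 2.65  # threshold for abnormal voxels stated in the abstract

def pet_biomarkers(suv, prostate_mask, voxel_volume_ml):
    """Automated measures: abnormal-voxel volume, its fraction of the prostate
    volume, and total lesion uptake (mean abnormal SUV x abnormal volume)."""
    abnormal = (suv > SUV_THRESHOLD) & prostate_mask
    abnormal_vol = abnormal.sum() * voxel_volume_ml
    prostate_vol = prostate_mask.sum() * voxel_volume_ml
    tlu = float(suv[abnormal].mean() * abnormal_vol) if abnormal.any() else 0.0
    return {
        "abnormal_volume_ml": float(abnormal_vol),
        "abnormal_fraction": float(abnormal_vol / prostate_vol),
        "total_lesion_uptake": tlu,
    }
```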
http://dx.doi.org/10.1111/cpf.12611
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7027436

Artificial intelligence-based versus manual assessment of prostate cancer in the prostate gland: a method comparison study.

Clin Physiol Funct Imaging 2019 Nov 8;39(6):399-406. Epub 2019 Sep 8.

Department of Clinical Research, University of Southern Denmark, Odense, Denmark.

Aim: To test the feasibility of a fully automated artificial intelligence-based method providing PET measures of prostate cancer (PCa).

Methods: A convolutional neural network (CNN) was trained for automated measurements in ¹⁸F-choline (FCH) PET/CT scans obtained prior to radical prostatectomy (RP) in 45 patients with newly diagnosed PCa. Automated values were obtained for prostate volume, maximal standardized uptake value (SUVmax), mean standardized uptake value of voxels considered abnormal (SUVmean) and volume of abnormal voxels (Volabn). The product SUVmean × Volabn was calculated to reflect total lesion uptake (TLU). Corresponding manual measurements were performed. CNN-estimated data were compared with the weights of the surgically removed tissue specimens and with the manually derived data, and related to clinical parameters, assuming that 1 g ≈ 1 ml of tissue.

Results: The mean (range) weight of the prostate specimens was 44 g (20-109), while the CNN-estimated volume was 62 ml (31-108), with a mean difference of 13.5 g or ml (95% CI: 9.78-17.32). The two measures were significantly correlated (r = 0.77, P < 0.001). Mean differences (95% CI) between CNN-based and manually derived PET measures of SUVmax, SUVmean, Volabn (ml) and TLU were 0.37 (−0.01 to 0.75), −0.08 (−0.30 to 0.14), 1.40 (−2.26 to 5.06) and 9.61 (−3.95 to 23.17), respectively. The PET findings Volabn and TLU correlated with PSA (P < 0.05), but not with Gleason score or stage.

Conclusion: Automated CNN segmentation provided, within seconds, volume and simple PET measures similar to the manually derived ones. Further studies on automated CNN segmentation with newer tracers, such as radiolabelled prostate-specific membrane antigen, are warranted.
http://dx.doi.org/10.1111/cpf.12592

Deep learning for segmentation of 49 selected bones in CT scans: First step in automated PET/CT-based 3D quantification of skeletal metastases.

Eur J Radiol 2019 Apr 1;113:89-95. Epub 2019 Feb 1.

Department of Translational Medicine, Lund University, Malmö, Sweden; Wallenberg Center for Molecular Medicine, Lund University, Malmö, Sweden.

Purpose: The aim of this study was to develop a deep learning-based method for segmentation of bones in CT scans and test its accuracy compared to manual delineation, as a first step in the creation of an automated PET/CT-based method for quantifying skeletal tumour burden.

Methods: Convolutional neural networks (CNNs) were trained to segment 49 bones using manual segmentations from 100 CT scans. After training, the CNN-based segmentation method was tested on 46 patients with prostate cancer, who had undergone ¹⁸F-choline PET/CT and ¹⁸F-NaF PET/CT less than three weeks apart. Bone volumes were calculated from the segmentations. The network's performance was compared with manual segmentations of five bones made by an experienced physician. The accuracy of the spatial overlap between automated CNN-based and manual segmentations of these five bones was assessed using the Sørensen-Dice index (SDI). Reproducibility was evaluated applying the Bland-Altman method.

Results: The median (SD) volumes of the five selected bones obtained by CNN-based and manual segmentation, respectively, were: Th7 41 (3.8) and 36 (5.1), L3 76 (13) and 75 (9.2), sacrum 284 (40) and 283 (26), 7th rib 33 (3.9) and 31 (4.8), and sternum 80 (11) and 72 (9.2). Median SDIs were 0.86 (Th7), 0.85 (L3), 0.88 (sacrum), 0.84 (7th rib) and 0.83 (sternum). The intraobserver volume difference was smaller with the CNN-based than with the manual approach: Th7 2% vs. 14%, L3 7% vs. 8%, sacrum 1% vs. 3%, 7th rib 1% vs. 6%, and sternum 3% vs. 5%. The average volume difference, measured as the ratio of volume difference to mean volume between the two CNN-based segmentations, was 5-6% for the vertebral column and ribs and ≤3% for the other bones.

Conclusion: The new deep learning-based method for automated segmentation of bones in CT scans provided highly accurate bone volumes in a fast and automated way, and thus appears to be a valuable first step in the development of a clinically useful processing procedure providing reliable skeletal segmentation as a key part of the quantification of skeletal metastases.
http://dx.doi.org/10.1016/j.ejrad.2019.01.028

Automated quantification of reference levels in liver and mediastinal blood pool for the Deauville therapy response classification using FDG-PET/CT in Hodgkin and non-Hodgkin lymphomas.

Clin Physiol Funct Imaging 2019 Jan 3;39(1):78-84. Epub 2018 Oct 3.

Department of Clinical Physiology and Nuclear Medicine, Lund University and Skåne University Hospital, Malmö, Sweden.

Background: 18F-FDG-PET/CT has become a standard for assessing treatment response in patients with lymphoma. A subjective interpretation of the scan based on the Deauville 5-point scale has been widely adopted. However, inter-observer variability due to the subjectivity of the interpretation is a limitation. Our main goal is to develop an objective and automated method for evaluating response. The first step is to develop and validate an artificial intelligence (AI)-based method, for the automated quantification of reference levels in the liver and mediastinal blood pool in patients with lymphoma.

Methods: The AI-based method was trained to segment the liver and the mediastinal blood pool in CT images from 80 lymphoma patients who had undergone 18F-FDG-PET/CT, and was then applied to a validation group of six lymphoma patients. The CT segmentations were transferred to the PET images to obtain automated standardized uptake values (SUV). The AI-based analysis was compared to corresponding manual segmentations performed by two radiologists.

Results: In the validation group, the mean differences between the AI-based liver SUV quantifications and those of the two radiologists were 0.02 and 0.02, respectively, and 0.02 and 0.02 for the mediastinal blood pool.

Conclusions: An AI-based method for the automated quantification of reference levels in the liver and mediastinal blood pool shows good agreement with results obtained by experienced radiologists who had manually segmented the CT images. This is a first, promising step towards objective treatment response evaluation in patients with lymphoma based on 18F-FDG-PET/CT.
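The SUVs extracted from the transferred segmentations follow the usual body-weight-normalised definition; a sketch, with names mine and assuming 1 g ≈ 1 ml of tissue:

```python
import numpy as np

def suv(activity_bq_per_ml, body_weight_kg, injected_dose_mbq):
    """Body-weight-normalised standardized uptake value: tissue activity
    concentration divided by injected dose per gram of body weight."""
    dose_bq = injected_dose_mbq * 1e6
    weight_g = body_weight_kg * 1000.0
    return activity_bq_per_ml / (dose_bq / weight_g)

def reference_suv_mean(pet_activity, organ_mask, body_weight_kg, injected_dose_mbq):
    """Mean SUV inside an organ mask transferred from the CT segmentation."""
    mean_activity = float(pet_activity[organ_mask].mean())
    return suv(mean_activity, body_weight_kg, injected_dose_mbq)
```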
http://dx.doi.org/10.1111/cpf.12546

Shortest Paths with Higher-Order Regularization.

IEEE Trans Pattern Anal Mach Intell 2015 Dec;37(12):2588-600

This paper describes a new method of finding thin, elongated structures in images and volumes. We use shortest paths to minimize very general functionals of higher-order curve properties, such as curvature and torsion. Our method uses line graphs to find the optimal path on a given discretization, often on the order of seconds on a single computer. The curves are then refined using local optimization, making it possible to recover very smooth curves. We are able to place constraints on our curves, such as a maximum integrated curvature, or a maximum curvature at any point of the curve. To our knowledge, we are the first to perform experiments in three dimensions with curvature and torsion regularization. The largest graphs we process have over a hundred billion arcs. Experiments on medical images and in multi-view reconstruction show the significance and practical usefulness of higher-order regularization.
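The line-graph idea can be illustrated with a toy Dijkstra search in which each search state is a directed edge, so the cost of extending a path can include a penalty at the shared vertex. This is a deliberately simplified sketch with a linear penalty on the turning angle, not the paper's general curvature/torsion functionals:

```python
import heapq
import math
from collections import defaultdict

def curvature_shortest_path(points, edges, s, t, turn_weight):
    """Shortest s-t path minimising total length plus turn_weight times the
    absolute turning angle at every interior vertex. Search states are
    directed edges (the line-graph construction)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def length(u, v):
        (x1, y1), (x2, y2) = points[u], points[v]
        return math.hypot(x2 - x1, y2 - y1)

    def turn(u, v, w):
        a1 = math.atan2(points[v][1] - points[u][1], points[v][0] - points[u][0])
        a2 = math.atan2(points[w][1] - points[v][1], points[w][0] - points[v][0])
        d = abs(a2 - a1) % (2 * math.pi)
        return min(d, 2 * math.pi - d)

    heap = [(length(s, v), s, v, [s, v]) for v in adj[s]]
    heapq.heapify(heap)
    settled = {}
    while heap:
        cost, u, v, path = heapq.heappop(heap)
        if v == t:
            return cost, path
        if (u, v) in settled and settled[(u, v)] <= cost:
            continue
        settled[(u, v)] = cost
        for w in adj[v]:
            if w != u:  # no immediate backtracking
                step = length(v, w) + turn_weight * turn(u, v, w)
                heapq.heappush(heap, (cost + step, v, w, path + [w]))
    return math.inf, []
```

With `turn_weight = 0` this reduces to ordinary Dijkstra on path length; increasing the weight trades length for smoothness.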
http://dx.doi.org/10.1109/TPAMI.2015.2409869

Exploratory study of EEG burst characteristics in preterm infants.

Annu Int Conf IEEE Eng Med Biol Soc 2013;2013:4295-8

In this paper, we study machine learning techniques and features of electroencephalography activity bursts for predicting outcome in extremely preterm infants. It was previously shown that the distribution of interburst interval durations predicts clinical outcome, but in previous work the information within the bursts has been neglected. In this paper, we perform exploratory analysis of feature extraction of burst characteristics and use machine learning techniques to show that such features could be used for outcome prediction. The results are promising, but further verification in larger datasets is needed to obtain conclusive results.
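The interburst intervals whose durations the earlier work built on can be extracted from a binary burst annotation with a simple run-length pass (a sketch; names are mine, and detecting the bursts themselves is a separate problem):

```python
import numpy as np

def interburst_intervals(burst_mask, fs_hz):
    """Durations (s) of the gaps between detected EEG bursts.
    burst_mask: boolean array, True where a burst is present; fs_hz: sampling
    rate. Leading and trailing quiet periods are ignored."""
    mask = np.asarray(burst_mask, dtype=bool)
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return []
    gaps = np.diff(idx) - 1  # quiet samples between consecutive burst samples
    return [g / fs_hz for g in gaps if g > 0]
```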
http://dx.doi.org/10.1109/EMBC.2013.6610495

An efficient optimization framework for multi-region segmentation based on Lagrangian duality.

IEEE Trans Med Imaging 2013 Feb 10;32(2):178-88. Epub 2012 Sep 10.

Centre for Mathematical Sciences, Lund University, Lund, Sweden.

We introduce a multi-region model for the simultaneous segmentation of medical images. In contrast to many other models, geometric constraints such as inclusion and exclusion between the regions are enforced, which makes it possible to correctly segment different regions even if their intensity distributions are identical. We efficiently optimize the model using a combination of graph cuts and Lagrangian duality, which is faster and more memory-efficient than the current state of the art. As the method is based on global optimization techniques, the resulting segmentations are independent of initialization. We apply our framework to the segmentation of the left and right ventricles, the myocardium and the left ventricular papillary muscles in magnetic resonance imaging, and to lung segmentation in full-body X-ray computed tomography. We evaluate our approach on a publicly available benchmark with competitive results.
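The graph-cut component can be illustrated with a self-contained binary example: unary costs become terminal capacities, the smoothness term becomes inter-pixel capacities, and a minimum cut is found via Edmonds-Karp max-flow. This is a toy 1D sketch of standard graph-cut segmentation; the paper's multi-region model with inclusion/exclusion constraints and its Lagrangian dual go well beyond this:

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a residual-capacity dict-of-dicts. Returns the
    flow value and the set of nodes on the source side of the minimum cut."""
    flow = 0.0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:  # BFS for an augmenting path
            u = queue.popleft()
            for v, c in list(cap[u].items()):
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow, set(parent)  # source-reachable nodes = source side
        path, v = [], t
        while parent[v] is not None:  # recover the augmenting path
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:  # update residual capacities
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
        flow += bottleneck

def segment_1d(signal, mu_bg, mu_fg, smooth):
    """Binary graph-cut segmentation of a 1D signal: unary terms are squared
    distances to the class means, pairwise terms penalise label changes."""
    s, t = "s", "t"
    cap = defaultdict(lambda: defaultdict(float))
    for i, x in enumerate(signal):
        cap[s][i] += (x - mu_bg) ** 2  # paid if pixel i is labelled background
        cap[i][t] += (x - mu_fg) ** 2  # paid if pixel i is labelled foreground
        if i + 1 < len(signal):
            cap[i][i + 1] += smooth    # discontinuity penalty
            cap[i + 1][i] += smooth
    _, source_side = max_flow(cap, s, t)
    return [1 if i in source_side else 0 for i in range(len(signal))]
```

On the example signal `[0.1, 0.2, 0.9, 0.8]` with class means 0 and 1 and a small smoothness weight, the cut reproduces the obvious labelling `[0, 0, 1, 1]`.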
http://dx.doi.org/10.1109/TMI.2012.2218117