Publications by authors named "Olof Enqvist"

22 Publications


Convolutional neural network-based automatic heart segmentation and quantitation in ¹²³I-metaiodobenzylguanidine SPECT imaging.

EJNMMI Res 2021 Oct 12;11(1):105. Epub 2021 Oct 12.

Department of Nuclear Medicine, Kanazawa University, Kanazawa, Japan.

Background: Since three-dimensional segmentation of the cardiac region in ¹²³I-metaiodobenzylguanidine (MIBG) studies has not been established, this study aimed to achieve organ segmentation using a convolutional neural network (CNN) with ¹²³I-MIBG single-photon emission computed tomography (SPECT) imaging, to calculate heart counts and washout rates (WR) automatically, and to compare the results with conventional quantitation based on planar imaging.

Methods: We assessed 48 patients (aged 68.4 ± 11.7 years) with heart and neurological diseases, including chronic heart failure, dementia with Lewy bodies, and Parkinson's disease. All patients were assessed by early and late ¹²³I-MIBG planar and SPECT imaging. The CNN was initially trained to individually segment the lungs and liver on early and late SPECT images. The segmentation masks were aligned, the CNN was then trained to directly segment the heart, and all models were evaluated using fourfold cross-validation. The CNN-based average heart counts and WR were calculated and compared with those determined using planar parameters. The CNN-based SPECT and conventional planar heart counts were corrected for physical time decay, injected dose of ¹²³I-MIBG, and body weight. We also divided WR into normal and abnormal groups based on linear regression lines determined by the relationship between planar WR and CNN-based WR and then analyzed agreement between them.

Results: The CNN segmented the cardiac region in patients with both normal and reduced uptake. The CNN-based SPECT heart counts correlated significantly with conventional planar heart counts with and without background correction and with the planar heart-to-mediastinum ratio (R = 0.862, 0.827, and 0.729, respectively; p < 0.0001). The CNN-based WR also correlated with planar WR with and without background correction and with WR based on heart-to-mediastinum ratios (R = 0.584, 0.568, and 0.507, respectively; p < 0.0001). Contingency table findings of high and low WR (cutoffs: 34% and 30% for planar and SPECT studies, respectively) showed 87.2% agreement between the CNN-based and planar methods.

Conclusions: The CNN could create segmentations from SPECT images, and average heart counts and WR were reliably calculated three-dimensionally, which might be a novel approach to quantifying SPECT images of innervation.
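The planar washout-rate quantitation that the study compares against reduces to a small calculation. A minimal sketch, assuming the common WR definition with optional physical decay correction of the late acquisition; the function name, fixed ¹²³I half-life, and example values are ours, not from the paper:

```python
# Sketch of planar washout-rate (WR) quantitation. The decay-correction
# step and the 123I half-life below are standard assumptions, not values
# taken from the paper.

T_HALF_123I_H = 13.22  # physical half-life of 123I in hours

def washout_rate(early_counts, late_counts, interval_h, decay_correct=True):
    """WR (%) from early and late heart counts.

    WR = (early - late) / early * 100, with the late counts optionally
    corrected for physical decay over the early-to-late interval
    (typically ~3 h in MIBG protocols).
    """
    late = late_counts
    if decay_correct:
        late *= 2.0 ** (interval_h / T_HALF_123I_H)
    return (early_counts - late) / early_counts * 100.0
```

With early = 1000, late = 700 and no decay correction this yields WR = 30%; decay-correcting the late counts lowers the computed washout.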
http://dx.doi.org/10.1186/s13550-021-00847-x

Deep learning takes the pain out of back breaking work - Automatic vertebral segmentation and attenuation measurement for osteoporosis.

Clin Imaging 2021 Aug 26;81:54-59. Epub 2021 Aug 26.

Göteborg University, SU Sahlgrenska, 413 45 Göteborg, Sweden.

Background: Osteoporosis is an underdiagnosed and undertreated disease worldwide. Recent studies have highlighted the use of simple vertebral trabecular attenuation values for opportunistic osteoporosis screening. Meanwhile, machine learning has been used to accurately segment large parts of the human skeleton.

Purpose: To evaluate a fully automated deep learning-based method for lumbar vertebral segmentation and measurement of vertebral volumetric trabecular attenuation values.

Material And Methods: A deep learning-based method for automated segmentation of bones was retrospectively applied to non-contrast CT scans of 1008 patients (mean age 57 years, 472 female, 536 male). Each vertebral segmentation was automatically reduced by 7 mm in all directions in order to avoid cortical bone. The mean and median volumetric attenuation values from Th12 to L4 were obtained and plotted against patient age and sex. L1 values were further analyzed to facilitate comparison with previous studies.

Results: The mean L1 attenuation values decreased linearly with age by -2.2 HU per year (age > 30, 95% CI: -2.4, -2.0, R² = 0.3544). The mean L1 attenuation value of the entire cohort was 140 ± 54 HU.

Conclusions: With results closely matching those of previous studies, we believe that our fully automated deep learning-based method can be used to obtain lumbar volumetric trabecular attenuation values which can be used for opportunistic screening of osteoporosis in patients undergoing CT scans for other reasons.
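The 7 mm inward reduction of each vertebral mask described in the methods can be sketched with a simple morphological erosion. A rough illustration assuming isotropic voxels and using `scipy.ndimage`; the function name and defaults are hypothetical, not the study's implementation:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def trabecular_mean_hu(ct_hu, vertebra_mask, voxel_mm, margin_mm=7.0):
    """Mean attenuation (HU) inside a vertebral mask shrunk by a margin.

    Shrinking the segmentation (~7 mm in the study) keeps the sampled
    region inside trabecular bone, away from the cortical shell.
    Isotropic voxels are assumed here for simplicity.
    """
    iterations = max(1, round(margin_mm / voxel_mm))
    core = binary_erosion(vertebra_mask, iterations=iterations)
    if not core.any():
        return float("nan")  # mask too small to shrink this far
    return float(np.asarray(ct_hu)[core].mean())
```

The eroded "core" is then the region whose mean or median HU is plotted against age and sex.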
http://dx.doi.org/10.1016/j.clinimag.2021.08.009

Artificial intelligence-based measurements of PET/CT imaging biomarkers are associated with disease-specific survival of high-risk prostate cancer patients.

Scand J Urol 2021 Sep 25:1-7. Epub 2021 Sep 25.

Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden.

Objective: Artificial intelligence (AI) offers new opportunities for objective quantitative measurements of imaging biomarkers from positron-emission tomography/computed tomography (PET/CT). Clinical image reporting relies predominantly on observer-dependent visual assessment and easily accessible measures like SUVmax, representing lesion uptake in a relatively small amount of tissue. Our hypothesis was that measurements of total volume and lesion uptake of the entire tumour would better reflect the disease's activity, with prognostic significance, compared with conventional measurements.

Methods: An AI-based algorithm was trained to automatically measure the prostate and its tumour content in PET/CT of 145 patients. The algorithm was then tested retrospectively on 285 high-risk patients, who were examined using ¹⁸F-choline PET/CT for primary staging between April 2008 and July 2015. Prostate tumour volume, tumour fraction of the prostate gland, lesion uptake of the entire tumour, and SUVmax were obtained automatically. Associations between these measurements, age, PSA, Gleason score and prostate cancer-specific survival were studied, using a Cox proportional-hazards regression model.

Results: Twenty-three patients died of prostate cancer during follow-up (median survival 3.8 years). Total tumour volume of the prostate (p = 0.008), tumour fraction of the gland (p = 0.005), total lesion uptake of the prostate (p = 0.02), and age (p = 0.01) were significantly associated with disease-specific survival, whereas SUVmax (p = 0.2), PSA (p = 0.2), and Gleason score (p = 0.8) were not.

Conclusion: AI-based assessments of total tumour volume and lesion uptake were significantly associated with disease-specific survival in this patient cohort, whereas SUVmax and Gleason score were not. The AI-based approach appears well suited for clinically relevant patient stratification and monitoring of individual therapy.
http://dx.doi.org/10.1080/21681805.2021.1977845

"Global" cardiac atherosclerotic burden assessed by artificial intelligence-based versus manual segmentation in ¹⁸F-sodium fluoride PET/CT scans: Head-to-head comparison.

J Nucl Cardiol 2021 Aug 12. Epub 2021 Aug 12.

Department of Nuclear Medicine, Odense University Hospital, 5000, Odense C, Denmark.

Background: Artificial intelligence (AI) is known to provide effective means to accelerate and facilitate clinical and research processes. In this study, we therefore aimed to compare an AI-based method for cardiac segmentation in positron emission tomography/computed tomography (PET/CT) scans with manual segmentation for assessment of global cardiac atherosclerosis burden.

Methods: A trained convolutional neural network (CNN) was used for cardiac segmentation in ¹⁸F-sodium fluoride PET/CT scans of 29 healthy volunteers and 20 angina pectoris patients and compared with manual segmentation. Parameters for segmented volume (Vol) and mean, maximal, and total standardized uptake values (SUVmean, SUVmax, SUVtotal) were analyzed by Bland-Altman limits of agreement. Repeatability of AI-based assessment of the same scans is by definition 100%. Repeatability (same conditions, same operator) and reproducibility (same conditions, two different operators) of manual segmentation were examined by re-segmentation in 25 randomly selected scans.

Results: Mean (± SD) values with manual vs CNN-based segmentation were Vol 617.65 ± 154.99 mL vs 625.26 ± 153.55 mL (P = .21), SUVmean 0.69 ± 0.15 vs 0.69 ± 0.15 (P = .26), SUVmax 2.68 ± 0.86 vs 2.77 ± 1.05 (P = .34), and SUVtotal 425.51 ± 138.93 vs 427.91 ± 132.68 (P = .62). Limits of agreement were −89.42 to 74.2, −0.02 to 0.02, −1.52 to 1.32, and −68.02 to 63.21, respectively. Manual segmentation typically lasted 30 minutes vs about one minute with the CNN-based approach. At manual re-segmentation, the maximal deviation across the four parameters was 0% to 0.5% with the same operator and 0% to 1% with different operators.

Conclusion: The CNN-based method was faster and provided values for Vol, SUVmean, SUVmax, and SUVtotal comparable to the manually obtained ones. This AI-based segmentation approach appears to offer a more reproducible and much faster substitute for slow and cumbersome manual segmentation of the heart.
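The Bland-Altman limits of agreement used here (and in several of the studies below) reduce to the mean difference plus or minus 1.96 sample standard deviations. A minimal sketch; the function name is ours:

```python
def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement for paired measurements a, b.

    Requires at least two pairs (uses the n-1 sample standard deviation).
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n                                   # mean difference
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd         # bias, lower, upper
```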
http://dx.doi.org/10.1007/s12350-021-02758-9

Artificial intelligence could alert for focal skeleton/bone marrow uptake in Hodgkin's lymphoma patients staged with FDG-PET/CT.

Sci Rep 2021 05 17;11(1):10382. Epub 2021 May 17.

Clinical Physiology and Nuclear Medicine, Skåne University Hospital, Malmö, Sweden.

The aim was to develop an artificial intelligence (AI)-based method for the detection of focal skeleton/bone marrow uptake (BMU) in patients with Hodgkin's lymphoma (HL) undergoing staging with FDG-PET/CT. The results of the AI in a separate test group were compared to the interpretations of independent physicians. The skeleton and bone marrow were segmented using a convolutional neural network. The training of the AI was based on 153 untreated patients. Bone uptake significantly higher than the mean BMU was marked as abnormal, and an index, based on the total squared abnormal uptake, was computed to identify focal uptake. Patients with an index above a predefined threshold were interpreted as having focal uptake. As the test group, 48 untreated patients with biopsy-proven HL who had undergone a staging FDG-PET/CT between 2017 and 2018 were retrospectively included. Ten physicians classified the 48 cases regarding focal skeleton/BMU. The majority of the physicians agreed with the AI in 39/48 cases (81%) regarding focal skeleton/bone marrow involvement. Inter-observer agreement between the physicians was moderate (Kappa 0.51, range 0.25-0.80). An AI-based method can thus be developed to highlight suspicious focal skeleton/BMU in HL patients staged with FDG-PET/CT. Inter-observer agreement regarding focal BMU is moderate among nuclear medicine physicians.
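The abnormality index described above (total squared uptake of voxels significantly above the mean BMU) might be sketched as follows. The abstract does not state the exact threshold rule, so the mean + k·SD criterion and the function name here are assumptions for illustration only:

```python
def focal_uptake_index(suv_values, k=2.0):
    """Sum of squared excess uptake in voxels well above the mean BMU.

    Voxels with SUV above mean + k*SD of the skeletal/bone-marrow mask
    are treated as abnormal and their squared excess is summed. The
    threshold rule (mean + k*SD) is an assumption, not from the paper.
    """
    n = len(suv_values)
    mean = sum(suv_values) / n
    sd = (sum((v - mean) ** 2 for v in suv_values) / n) ** 0.5
    threshold = mean + k * sd
    return sum((v - threshold) ** 2 for v in suv_values if v > threshold)
```

Patients whose index exceeds a predefined cutoff would then be flagged as having focal uptake.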
http://dx.doi.org/10.1038/s41598-021-89656-9
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8128858

Aortic wall segmentation in ¹⁸F-sodium fluoride PET/CT scans: Head-to-head comparison of artificial intelligence-based versus manual segmentation.

J Nucl Cardiol 2021 May 12. Epub 2021 May 12.

Department of Nuclear Medicine, Odense University Hospital, 5000, Odense, Denmark.

Background: We aimed to establish and test an automated AI-based method for rapid segmentation of the aortic wall in positron emission tomography/computed tomography (PET/CT) scans.

Methods: For segmentation of the wall in three sections, the arch, thoracic, and abdominal aorta, we developed a tool based on a convolutional neural network (CNN), available on the Research Consortium for Medical Image Analysis (RECOMIA) platform, capable of segmenting 100 different labels in CT images. It was tested on ¹⁸F-sodium fluoride PET/CT scans of 49 subjects (29 healthy controls and 20 angina pectoris patients) and compared to data obtained by manual segmentation. The following derived parameters were compared using Bland-Altman limits of agreement: segmented volume, and maximal, mean, and total standardized uptake values (SUVmax, SUVmean, SUVtotal). The repeatability of the manual method was examined in 25 randomly selected scans.

Results: CNN-derived values for volume, SUVmax, and SUVtotal were all slightly, i.e., 13-17%, lower than the corresponding manually obtained ones, whereas SUVmean values for the three aortic sections were virtually identical for the two methods. Manual segmentation lasted typically 1-2 hours per scan compared to about one minute with the CNN-based approach. The maximal deviation at repeat manual segmentation was 6%.

Conclusions: The automated CNN-based approach was much faster and provided parameters that were about 15% lower than the manually obtained values, except for SUVmean values, which were comparable. AI-based segmentation of the aorta already now appears as a trustworthy and fast alternative to slow and cumbersome manual segmentation.
http://dx.doi.org/10.1007/s12350-021-02649-z

AI-based detection of lung lesions in [¹⁸F]FDG PET-CT from lung cancer patients.

EJNMMI Phys 2021 Mar 25;8(1):32. Epub 2021 Mar 25.

Department of Clinical Physiology, Sahlgrenska University Hospital, Gothenburg, Sweden.

Background: [¹⁸F]fluorodeoxyglucose (FDG) positron emission tomography with computed tomography (PET-CT) is a well-established modality in the work-up of patients with suspected or confirmed diagnosis of lung cancer. Recent research efforts have focused on extracting theragnostic and textural information from manually indicated lung lesions. Both semi-automatic and fully automatic use of artificial intelligence (AI) to localise and classify FDG-avid foci has been demonstrated. To fully harness AI's usefulness, we have developed a method which both automatically detects abnormal lung lesions and calculates the total lesion glycolysis (TLG) on FDG PET-CT.

Methods: One hundred twelve patients (59 females and 53 males) who underwent FDG PET-CT due to suspected lung cancer or for the management of known lung cancer were studied retrospectively. These patients were divided into a training group (59%; n = 66), a validation group (20.5%; n = 23) and a test group (20.5%; n = 23). A nuclear medicine physician manually segmented abnormal lung lesions with increased FDG uptake in all PET-CT studies. The AI-based method was trained to segment the lesions based on the manual segmentations. TLG was then calculated from the manual and AI-based measurements, respectively, and analysed with Bland-Altman plots.

Results: The AI tool's performance in detecting lesions had a sensitivity of 90%. One small lesion was missed in each of two patients, both of whom had a larger lesion that was correctly detected. The positive and negative predictive values were 88% and 100%, respectively. The correlation between manual and AI TLG measurements was strong (R = 0.74). Bias was 42 g and the 95% limits of agreement ranged from −736 to 819 g. Agreement was particularly high for smaller lesions.

Conclusions: The AI-based method is suitable for the detection of lung lesions and automatic calculation of TLG in small- to medium-sized tumours. In a clinical setting, it will have an added value due to its capability to sort out negative examinations resulting in prioritised and focused care on patients with potentially malignant lesions.
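Given a lesion segmentation, the TLG computation itself is straightforward: metabolic tumour volume times the mean SUV of the segmented voxels. A minimal sketch (the paper does not describe its implementation, and the function name is ours):

```python
def total_lesion_glycolysis(lesion_suvs, voxel_volume_ml):
    """TLG = metabolic tumour volume (ml) * SUVmean over the lesion voxels.

    lesion_suvs is a flat sequence of SUV values for the voxels inside
    the (manual or AI-based) lesion segmentation.
    """
    if not lesion_suvs:
        return 0.0
    volume_ml = len(lesion_suvs) * voxel_volume_ml
    suv_mean = sum(lesion_suvs) / len(lesion_suvs)
    return volume_ml * suv_mean
```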
http://dx.doi.org/10.1186/s40658-021-00376-5
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7994489

Artificial intelligence-aided CT segmentation for body composition analysis: a validation study.

Eur Radiol Exp 2021 Mar 11;5(1):11. Epub 2021 Mar 11.

Region Västra Götaland, Department of Clinical Physiology, Sahlgrenska University Hospital, Gothenburg, Sweden.

Background: Body composition is associated with survival outcome in oncological patients, but it is not routinely calculated. Manual segmentation of subcutaneous adipose tissue (SAT) and muscle is time-consuming and therefore limited to a single CT slice. Our goal was to develop an artificial-intelligence (AI)-based method for automated quantification of three-dimensional SAT and muscle volumes from CT images.

Methods: Ethical approvals from Gothenburg and Lund Universities were obtained. Convolutional neural networks were trained to segment SAT and muscle using manual segmentations on CT images from a training group of 50 patients. The method was applied to a separate test group of 74 cancer patients, who had two CT studies each with a median interval between the studies of 3 days. Manual segmentations in a single CT slice were used for comparison. The accuracy was measured as overlap between the automated and manual segmentations.

Results: The accuracy of the AI method was 0.96 for SAT and 0.94 for muscle. The average differences in volumes were significantly lower than the corresponding differences in areas in a single CT slice: 1.8% versus 5.0% (p < 0.001) for SAT and 1.9% versus 3.9% (p < 0.001) for muscle. The 95% confidence intervals for predicted volumes in an individual subject from the corresponding single CT slice areas were in the order of ± 20%.

Conclusions: The AI-based tool for quantification of SAT and muscle volumes showed high accuracy and reproducibility and provided a body composition analysis that is more relevant than manual analysis of a single CT slice.
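The overlap accuracy reported here is the Sørensen-Dice index used throughout these studies. For reference, a minimal sketch over flattened binary masks:

```python
def dice_index(mask_a, mask_b):
    """Sorensen-Dice overlap between two binary masks (flat sequences).

    Returns 2*|A intersect B| / (|A| + |B|); 1.0 for two empty masks.
    """
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(map(bool, mask_a)) + sum(map(bool, mask_b))
    return 2.0 * intersection / total if total else 1.0
```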
http://dx.doi.org/10.1186/s41747-021-00210-8
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7947128

Auto-segmentations by convolutional neural network in cervical and anorectal cancer with clinical structure sets as the ground truth.

Clin Transl Radiat Oncol 2020 Nov 14;25:37-45. Epub 2020 Sep 14.

Wallenberg Centre for Molecular Medicine, Lund University, Lund, Sweden.

Background: It is time-consuming for oncologists to delineate volumes for radiotherapy treatment in computed tomography (CT) images. Automatic delineation based on image processing exists, but with varied accuracy and moderate time savings. Using a convolutional neural network (CNN), volumes can be delineated faster and more accurately. We used CTs with their annotated structure sets to train and evaluate a CNN.

Material And Methods: The CNN is a standard segmentation network modified to minimize memory usage. We used CTs and structure sets from 75 cervical cancers and 191 anorectal cancers receiving radiation therapy at Skåne University Hospital 2014-2018. Five structures were investigated: left/right femoral heads, bladder, bowel bag, and clinical target volume of lymph nodes (CTVNs). Dice score and mean surface distance (MSD) (mm) evaluated accuracy, and one oncologist qualitatively evaluated auto-segmentations.

Results: Median Dice/MSD scores for anorectal cancer were 0.91-0.92/1.93-1.86 for the femoral heads, 0.94/2.07 for the bladder, and 0.83/6.80 for the bowel bag. Median Dice/MSD scores for cervical cancer were 0.93-0.94/1.42-1.49 for the femoral heads, 0.84/3.51 for the bladder, 0.88/5.80 for the bowel bag, and 0.82/3.89 for the CTVNs. On qualitative evaluation, femoral head and bladder auto-segmentations were mostly excellent, whereas a larger proportion of CTVN auto-segmentations were unacceptable.

Discussion: It is possible to train a CNN with high overlap using structure sets as ground truth. Manually delineated pelvic volumes from structure sets do not always strictly follow volume boundaries and are sometimes inaccurately defined, which leads to similar inaccuracies in the CNN output. More data that is consistently annotated is needed to achieve higher CNN accuracy and to enable future clinical implementation.
http://dx.doi.org/10.1016/j.ctro.2020.09.004
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7519211

Artificial intelligence-based detection of lymph node metastases by PET/CT predicts prostate cancer-specific survival.

Clin Physiol Funct Imaging 2021 Jan 18;41(1):62-67. Epub 2020 Oct 18.

Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden.

Introduction: Lymph node metastases are a key prognostic factor in prostate cancer (PCa), but detecting lymph node lesions from PET/CT images is a subjective process resulting in inter-reader variability. Artificial intelligence (AI)-based methods can provide an objective image analysis. We aimed at developing and validating an AI-based tool for detection of lymph node lesions.

Methods: A group of 399 patients with biopsy-proven PCa who had undergone ¹⁸F-choline PET/CT for staging prior to treatment was used to train (n = 319) and test (n = 80) the AI-based tool. The tool consisted of convolutional neural networks using complete PET/CT scans as inputs. In the test set, the AI-based lymph node detections were compared to those of two independent readers. The association with PCa-specific survival was investigated.

Results: The AI-based tool detected more lymph node lesions than Reader B (98 vs 87 of 117; p = .045) using Reader A as reference. The AI-based tool and Reader A showed similar performance (90 vs 87 of 111; p = .63) using Reader B as reference. The number of lymph node lesions detected by the AI-based tool, PSA, and curative treatment were each significantly associated with PCa-specific survival.

Conclusion: This study shows the feasibility of using an AI-based tool for automated and objective interpretation of PET/CT images; it can provide assessments of lymph node lesions comparable with those of experienced readers, as well as prognostic information, in PCa patients.
http://dx.doi.org/10.1111/cpf.12666

RECOMIA - a cloud-based platform for artificial intelligence research in nuclear medicine and radiology.

EJNMMI Phys 2020 Aug 4;7(1):51. Epub 2020 Aug 4.

Department of Clinical Physiology, Sahlgrenska University Hospital, Gothenburg, Sweden.

Background: Artificial intelligence (AI) is about to transform medical imaging. The Research Consortium for Medical Image Analysis (RECOMIA), a not-for-profit organisation, has developed an online platform to facilitate collaboration between medical researchers and AI researchers. The aim is to minimise the time and effort researchers need to spend on technical aspects, such as transfer, display, and annotation of images, as well as legal aspects, such as de-identification. The purpose of this article is to present the RECOMIA platform and its AI-based tools for organ segmentation in computed tomography (CT), which can be used for extraction of standardised uptake values from the corresponding positron emission tomography (PET) image.

Results: The RECOMIA platform includes modules for (1) local de-identification of medical images, (2) secure transfer of images to the cloud-based platform, (3) display functions available using a standard web browser, (4) tools for manual annotation of organs or pathology in the images, (5) deep learning-based tools for organ segmentation or other customised analyses, (6) tools for quantification of segmented volumes, and (7) an export function for the quantitative results. The AI-based tool for organ segmentation in CT currently handles 100 organs (77 bones and 23 soft tissue organs). The segmentation is based on two convolutional neural networks (CNNs): one network to handle organs with multiple similar instances, such as vertebrae and ribs, and one network for all other organs. The CNNs have been trained using CT studies from 339 patients, in which experienced radiologists annotated the organs. The performance of the segmentation tool, measured as the mean Dice index on a manually annotated test set with 10 representative organs, was 0.93 for all foreground voxels, and the mean Dice index over the organs was 0.86 (0.82 for the soft tissue organs and 0.90 for the bones).

Conclusion: The paper presents a platform that provides deep learning-based tools for basic organ segmentation in CT, which can then be used to automatically obtain the corresponding measurements in the PET image. The RECOMIA platform is available on request at www.recomia.org for research purposes.
http://dx.doi.org/10.1186/s40658-020-00316-9
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7403290

Deep learning-based quantification of PET/CT prostate gland uptake: association with overall survival.

Clin Physiol Funct Imaging 2020 Mar 20;40(2):106-113. Epub 2019 Dec 20.

Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden.

Aim: To validate a deep-learning (DL) algorithm for automated quantification of prostate cancer on positron emission tomography/computed tomography (PET/CT) and explore the potential of PET/CT measurements as prognostic biomarkers.

Material And Methods: Training of the DL-algorithm regarding prostate volume was performed on manually segmented CT images in 100 patients. Validation of the DL-algorithm was carried out in 45 patients with biopsy-proven hormone-naïve prostate cancer. The automated measurements of prostate volume were compared with manual measurements made independently by two observers. PET/CT measurements of tumour burden based on volume and SUV of abnormal voxels were calculated automatically. Voxels in the co-registered ¹⁸F-choline PET images above a standardized uptake value (SUV) of 2·65, and corresponding to the prostate as defined by the automated segmentation in the CT images, were defined as abnormal. Validation of abnormal voxels was performed by manual segmentation of radiotracer uptake. Agreement between algorithm and observers regarding prostate volume was analysed by the Sørensen-Dice index (SDI). Associations between automatically derived PET/CT biomarkers and age, prostate-specific antigen (PSA), and Gleason score, as well as overall survival, were evaluated using a univariate Cox regression model.

Results: The SDI between the automated and the manual volume segmentations was 0·78 and 0·79, respectively. Automated PET/CT measures reflecting total lesion uptake and the relation between volume of abnormal voxels and total prostate volume were significantly associated with overall survival (P = 0·02), whereas age, PSA, and Gleason score were not.

Conclusion: Automated PET/CT biomarkers showed good agreement to manual measurements and were significantly associated with overall survival.
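The abnormal-voxel definition above (SUV > 2·65 inside the automated prostate segmentation) and the derived tumour-burden measures can be sketched as follows; the function and variable names are illustrative, not from the paper:

```python
def prostate_pet_biomarkers(suv, prostate_mask, voxel_ml, threshold=2.65):
    """Abnormal-voxel volume and total lesion uptake (TLU) in the prostate.

    suv and prostate_mask are parallel flat sequences of voxel values;
    voxels inside the prostate with SUV above the threshold (2.65 in the
    study) are abnormal. TLU = SUVmean of abnormal voxels * their volume.
    """
    abnormal = [s for s, inside in zip(suv, prostate_mask)
                if inside and s > threshold]
    volume_ml = len(abnormal) * voxel_ml
    suv_mean = sum(abnormal) / len(abnormal) if abnormal else 0.0
    return volume_ml, suv_mean * volume_ml
```

The ratio of the abnormal volume to the total segmented prostate volume gives the "relation between volume of abnormal voxels and total prostate volume" tested for association with survival.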
http://dx.doi.org/10.1111/cpf.12611
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7027436

Artificial intelligence-based versus manual assessment of prostate cancer in the prostate gland: a method comparison study.

Clin Physiol Funct Imaging 2019 Nov 8;39(6):399-406. Epub 2019 Sep 8.

Department of Clinical Research, University of Southern Denmark, Odense, Denmark.

Aim: To test the feasibility of a fully automated artificial intelligence-based method providing PET measures of prostate cancer (PCa).

Methods: A convolutional neural network (CNN) was trained for automated measurements in ¹⁸F-choline (FCH) PET/CT scans obtained prior to radical prostatectomy (RP) in 45 patients with newly diagnosed PCa. Automated values were obtained for prostate volume, maximal standardized uptake value (SUVmax), mean standardized uptake value of voxels considered abnormal (SUVmean), and volume of abnormal voxels (Volabn). The product SUVmean × Volabn was calculated to reflect total lesion uptake (TLU). Corresponding manual measurements were performed. CNN-estimated data were compared with the weights of the surgically removed tissue specimens and with manually derived data, and related to clinical parameters, assuming that 1 g ≈ 1 ml of tissue.

Results: The mean (range) weight of the prostate specimens was 44 g (20-109), while the CNN-estimated volume was 62 ml (31-108), with a mean difference of 13·5 g or ml (95% CI: 9·78-17·32). The two measures were significantly correlated (r = 0·77, P < 0·001). Mean differences (95% CI) between CNN-based and manually derived PET measures of SUVmax, SUVmean, Volabn (ml), and TLU were 0·37 (−0·01 to 0·75), −0·08 (−0·30 to 0·14), 1·40 (−2·26 to 5·06), and 9·61 (−3·95 to 23·17), respectively. The PET findings Volabn and TLU correlated with PSA (P < 0·05), but not with Gleason score or stage.

Conclusion: Automated CNN segmentation provided in seconds volume and simple PET measures similar to manually derived ones. Further studies on automated CNN segmentation with newer tracers such as radiolabelled prostate-specific membrane antigen are warranted.
http://dx.doi.org/10.1111/cpf.12592

Denoising of Scintillation Camera Images Using a Deep Convolutional Neural Network: A Monte Carlo Simulation Approach.

J Nucl Med 2020 02 19;61(2):298-303. Epub 2019 Jul 19.

Clinical Physiology and Nuclear Medicine, Skåne University Hospital, Malmö, Sweden; and.

Scintillation camera images contain a large amount of Poisson noise. We have investigated whether noise can be removed in whole-body bone scans using convolutional neural networks (CNNs) trained with sets of noisy and noiseless images obtained by Monte Carlo simulation.

Methods: Three CNNs were generated using 3 different sets of training images: simulated bone scan images, images of a cylindric phantom with hot and cold spots, and a mix of the first two. Each training set consisted of 40,000 noiseless and noisy image pairs. The CNNs were evaluated with simulated images of a cylindric phantom and simulated bone scan images. The mean squared error between filtered and true images was used as the difference metric, and the coefficient of variation was used to estimate noise reduction. The CNNs were compared with gaussian and median filters. A clinical evaluation was performed in which the ability to detect metastases in CNN- and gaussian-filtered bone scans with half the number of counts was compared with standard bone scans.

Results: The best CNN reduced the coefficient of variation by, on average, 92%, and the best standard filter reduced the coefficient of variation by 88%. The best CNN gave a mean squared error that was on average 68% and 20% better than the best standard filters, for the cylindric and bone scan images, respectively. The best CNNs for the cylindric phantom and bone scans were the dedicated CNNs. No significant differences in the ability to detect metastases were found between standard, CNN-filtered, and gaussian-filtered bone scans.

Conclusion: Noise can be removed efficiently regardless of noise level with little or no resolution loss. The CNN filter enables reducing the scanning time by half while still obtaining good accuracy for bone metastasis assessment.
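The Monte Carlo training strategy, pairing a noiseless simulated count image with a Poisson-degraded copy, and the coefficient-of-variation noise metric can be sketched as follows; the function names and the fixed seed are ours:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (our choice)

def make_training_pair(noiseless_counts):
    """(noisy, noiseless) image pair for training a denoising CNN.

    Scintillation camera noise is Poisson-distributed, so a noiseless
    Monte Carlo count image can be degraded by Poisson sampling to form
    the kind of paired training data described in the paper.
    """
    noiseless = np.asarray(noiseless_counts, dtype=float)
    noisy = rng.poisson(noiseless).astype(float)
    return noisy, noiseless

def coefficient_of_variation(image):
    """CV (SD / mean), the residual-noise metric used in the evaluation."""
    image = np.asarray(image, dtype=float)
    return float(image.std() / image.mean())
```

Comparing the CV of a filtered image against the unfiltered one gives the percentage noise reduction reported for the CNN and standard filters.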
http://dx.doi.org/10.2967/jnumed.119.226613

Correction to: 3D skeletal uptake of ¹⁸F sodium fluoride in PET/CT images is associated with overall survival in patients with prostate cancer.

EJNMMI Res 2019 05 20;9(1):44. Epub 2019 May 20.

Department of Translational Medicine, Lund University, Malmö, Sweden.

Following publication of the original article [1], the authors flagged that the Kaplan-Meier curve in Fig. 6 is a duplication of the Kaplan-Meier curve in Fig. 5, which is not correct.
http://dx.doi.org/10.1186/s13550-019-0510-0
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6527652

Deep learning for segmentation of 49 selected bones in CT scans: First step in automated PET/CT-based 3D quantification of skeletal metastases.

Eur J Radiol 2019 Apr 1;113:89-95. Epub 2019 Feb 1.

Department of Translational Medicine, Lund University, Malmö, Sweden; Wallenberg Center for Molecular Medicine, Lund University, Malmö, Sweden.

Purpose: The aim of this study was to develop a deep learning-based method for segmentation of bones in CT scans and test its accuracy compared to manual delineation, as a first step in the creation of an automated PET/CT-based method for quantifying skeletal tumour burden.

Methods: Convolutional neural networks (CNNs) were trained to segment 49 bones using manual segmentations from 100 CT scans. After training, the CNN-based segmentation method was tested on 46 patients with prostate cancer who had undergone 18F-choline PET/CT and 18F-NaF PET/CT less than three weeks apart. Bone volumes were calculated from the segmentations. The network's performance was compared with manual segmentations of five bones made by an experienced physician. The spatial overlap between automated CNN-based and manual segmentations of these five bones was assessed using the Sørensen-Dice index (SDI). Reproducibility was evaluated by applying the Bland-Altman method.
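The overlap measure referred to above can be sketched directly (the toy masks are illustrative, not study data):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Sørensen-Dice index between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((4, 4), dtype=bool); auto[1:3, 1:3] = True      # CNN mask, 4 voxels
manual = np.zeros((4, 4), dtype=bool); manual[1:3, 1:4] = True  # manual mask, 6 voxels
print(dice(auto, manual))  # 2*4 / (4+6) = 0.8
```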

Results: The median (SD) volumes of the five selected bones, by CNN-based and manual segmentation respectively, were: Th7 41 (3.8) and 36 (5.1), L3 76 (13) and 75 (9.2), sacrum 284 (40) and 283 (26), 7th rib 33 (3.9) and 31 (4.8), and sternum 80 (11) and 72 (9.2). Median SDIs were 0.86 (Th7), 0.85 (L3), 0.88 (sacrum), 0.84 (7th rib), and 0.83 (sternum). The intraobserver volume difference was smaller with the CNN-based than with the manual approach: Th7 2% and 14%, L3 7% and 8%, sacrum 1% and 3%, 7th rib 1% and 6%, and sternum 3% and 5%, respectively. The average volume difference, measured as the ratio of the volume difference to the mean volume between the two CNN-based segmentations, was 5-6% for the vertebral column and ribs and ≤3% for other bones.

Conclusion: The new deep learning-based method for automated segmentation of bones in CT scans provided highly accurate bone volumes in a fast and automated way. It thus appears to be a valuable first step in the development of a clinically useful processing procedure providing reliable skeletal segmentation as a key part of the quantification of skeletal metastases.
Source
http://dx.doi.org/10.1016/j.ejrad.2019.01.028
April 2019

Automated quantification of reference levels in liver and mediastinal blood pool for the Deauville therapy response classification using FDG-PET/CT in Hodgkin and non-Hodgkin lymphomas.

Clin Physiol Funct Imaging 2019 Jan 3;39(1):78-84. Epub 2018 Oct 3.

Department of Clinical Physiology and Nuclear Medicine, Lund University and Skåne University Hospital, Malmö, Sweden.

Background: 18F-FDG-PET/CT has become a standard for assessing treatment response in patients with lymphoma. A subjective interpretation of the scan based on the Deauville 5-point scale has been widely adopted. However, inter-observer variability due to the subjectivity of the interpretation is a limitation. Our main goal is to develop an objective and automated method for evaluating response. The first step is to develop and validate an artificial intelligence (AI)-based method, for the automated quantification of reference levels in the liver and mediastinal blood pool in patients with lymphoma.

Methods: The AI-based method was trained to segment the liver and the mediastinal blood pool in CT images from 80 lymphoma patients who had undergone 18F-FDG-PET/CT, and was then applied to a validation group of six lymphoma patients. CT segmentations were transferred to the PET images to obtain automatic standardized uptake values (SUVs). The AI-based analysis was compared with corresponding manual segmentations performed by two radiologists.
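Once a CT segmentation has been transferred to the PET grid, obtaining the reference SUV reduces to averaging PET voxels inside the organ mask; a minimal sketch with invented toy volumes (the CT-to-PET resampling step is assumed to have been done upstream):

```python
import numpy as np

def mean_suv(pet_suv: np.ndarray, organ_mask: np.ndarray) -> float:
    """Mean SUV inside a segmented organ; the mask is already on the PET grid."""
    return float(pet_suv[organ_mask].mean())

# Toy volumes: liver-like uptake of 2.0 in the masked region, background 0.5.
pet = np.full((8, 8, 8), 0.5)
liver = np.zeros_like(pet, dtype=bool); liver[2:6, 2:6, 2:6] = True
pet[liver] = 2.0
print(mean_suv(pet, liver))  # 2.0
```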

Results: The mean differences between the AI-based liver SUV quantifications and those of the two radiologists in the validation group were 0.02 and 0.02, respectively, and 0.02 and 0.02 for the mediastinal blood pool.

Conclusions: An AI-based method for the automated quantification of reference levels in the liver and mediastinal blood pool shows good agreement with results obtained by experienced radiologists who had manually segmented the CT images. This is a first, promising step towards objective treatment response evaluation in patients with lymphoma based on 18F-FDG-PET/CT.
Source
http://dx.doi.org/10.1111/cpf.12546
January 2019

Action sequencing in the spontaneous swimming behavior of zebrafish larvae - implications for drug development.

Sci Rep 2017 06 9;7(1):3191. Epub 2017 Jun 9.

Integrative Neurophysiology and Neurotechnology, NRC, Department of Experimental Medical Sciences, Lund University, Lund, Sweden.

All motile organisms need to organize their motor output to obtain functional goals. In vertebrates, natural behaviors are generally composed of a relatively large set of motor components, which in turn are combined into a rich repertoire of complex actions. It is therefore an experimental challenge to investigate the organizational principles of natural behaviors. Using the relatively simple locomotion pattern of 10-day-old zebrafish larvae, we have here characterized the basic organizational principles governing swimming behavior. Our results show that transitions between different behavioral states can be described by a model combining a stochastic component with a control signal. By dividing swimming bouts into a limited number of categories, we show that similar types of swimming behavior, as well as standstills between bouts, were temporally clustered, indicating a basic level of action sequencing. Finally, we show that pharmacological manipulations known to induce alterations in the organization of motor behavior in mammals, mainly through basal ganglia interactions, have related effects in zebrafish larvae. This latter finding may be of specific relevance to the field of drug development, given the growing importance of zebrafish larvae in phenotypic screening for novel drug candidates acting on central nervous system targets.
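One simple way to expose temporal clustering of bout categories is a first-order transition matrix over the categorized sequence; a minimal sketch with an invented toy sequence (this is not the study's model, which additionally includes a control signal on top of the stochastic component):

```python
import numpy as np

def transition_matrix(states, n_states):
    """Row-normalized first-order transition counts between behavioral categories."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Toy sequence of bout categories; clustered repeats mimic action sequencing,
# visible as large diagonal entries in the transition matrix.
seq = [0, 0, 0, 1, 1, 2, 2, 2, 0, 0]
P = transition_matrix(seq, 3)
print(P.round(2))
```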
Source
http://dx.doi.org/10.1038/s41598-017-03144-7
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5466685
June 2017

3D skeletal uptake of 18F sodium fluoride in PET/CT images is associated with overall survival in patients with prostate cancer.

EJNMMI Res 2017 Dec 16;7(1):15. Epub 2017 Feb 16.

Department of Translational Medicine, Lund University, Malmö, Sweden.

Background: Sodium fluoride (NaF) positron emission tomography combined with computed tomography (PET/CT) has been shown to be more sensitive than the whole-body bone scan in the detection of skeletal uptake due to metastases in prostate cancer. We aimed to calculate a 3D index for NaF PET/CT and investigate its correlation with the bone scan index (BSI) and overall survival (OS) in a group of patients with prostate cancer.

Methods: NaF PET/CT and bone scans were studied in 48 patients with prostate cancer. Automated segmentation of the thoracic and lumbar spine, sacrum, pelvis, ribs, scapulae, clavicles, and sternum was performed in the CT images. Hotspots in the PET images were selected using both a manual and an automated method. The volume of each hotspot localized in the skeleton in the corresponding CT image was calculated. Two PET/CT indices, based on manual segmentation (manual PET index) and automatic segmentation using an SUV threshold of 15 (automated PET index), were calculated by dividing the sum of all hotspot volumes by the volume of all segmented bones. BSI values were obtained using software for automated calculation.
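With isotropic voxels the automated index reduces to a voxel-count ratio, since the voxel volume cancels; a minimal sketch assuming the fixed SUV threshold of 15 mentioned above (toy data, illustrative names):

```python
import numpy as np

def pet_index(suv: np.ndarray, bone_mask: np.ndarray, threshold: float = 15.0) -> float:
    """Fraction of the segmented skeletal volume whose SUV exceeds the
    hotspot threshold; with isotropic voxels the voxel volume cancels."""
    hotspots = (suv > threshold) & bone_mask
    return float(hotspots.sum() / bone_mask.sum())

# Toy example: 1000 bone voxels, 20 of them above SUV 15.
suv = np.zeros((10, 10, 10))
bone = np.ones_like(suv, dtype=bool)
suv.ravel()[:20] = 20.0
print(pet_index(suv, bone))  # 0.02
```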

Results: BSI, manual PET index, and automated PET index were all significantly associated with OS and concordance indices were 0.68, 0.69, and 0.70, respectively. The median BSI was 0.39 and patients with a BSI >0.39 had a significantly shorter median survival time than patients with a BSI <0.39 (2.3 years vs not reached after 5 years of follow-up [p = 0.01]). The median manual PET index was 0.53 and patients with a manual PET index >0.53 had a significantly shorter median survival time than patients with a manual PET index <0.53 (2.5 years vs not reached after 5 years of follow-up [p < 0.001]). The median automated PET index was 0.11 and patients with an automated PET index >0.11 had a significantly shorter median survival time than patients with an automated PET index <0.11 (2.3 years vs not reached after 5 years of follow-up [p < 0.001]).
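The concordance indices reported above follow Harrell's C; a minimal pure-Python sketch that handles right-censoring (the toy cohort is invented, not study data):

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C: among usable pairs, the fraction where the higher-risk
    patient fails earlier. A pair is usable if the shorter time is an event."""
    time, event, risk = map(np.asarray, (time, event, risk))
    num = den = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:  # i observed to fail before j
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

# Toy cohort: a higher index value should predict shorter survival.
t = [2.3, 5.0, 1.8, 4.1]           # years
e = [1, 0, 1, 1]                   # 1 = death observed, 0 = censored
idx = [0.6, 0.1, 0.8, 0.2]         # e.g. a PET index used as the risk score
print(round(concordance_index(t, e, idx), 2))  # 1.0 (perfectly concordant toy data)
```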

Conclusions: PET/CT indices based on NaF PET/CT are correlated to BSI and significantly associated with overall survival in patients with prostate cancer.
Source
http://dx.doi.org/10.1186/s13550-017-0264-5
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5313492
December 2017

Automatic pericardium segmentation and quantification of epicardial fat from computed tomography angiography.

J Med Imaging (Bellingham) 2016 Jul 15;3(3):034003. Epub 2016 Sep 15.

Chalmers University of Technology, Department of Signals and Systems, Hörsalsvägen 9-11, Gothenburg 412 96, Sweden; Lund University, Faculty of Engineering, Centre for Mathematical Sciences, Sölvegatan 18, Lund 221 00, Sweden.

Recent findings indicate a strong correlation between the risk of future heart disease and the volume of adipose tissue inside the pericardium. So far, large-scale studies have been hindered by the fact that manual delineation of the pericardium is extremely time-consuming and that existing methods for automatic delineation lack accuracy. An efficient and fully automatic approach to pericardium segmentation and epicardial fat volume (EFV) estimation is presented, based on a variant of multi-atlas segmentation for spatial initialization and a random forest classifier for accurate pericardium detection. Experimental validation on a set of 30 manually delineated computed tomography angiography volumes shows a significant improvement on the state of the art in terms of EFV estimation [mean absolute EFV difference: 3.8 ml (4.7%), Pearson correlation: 0.99] with run times suitable for large-scale studies (52 s). Further, the results compare favorably with interobserver variability measured on 10 volumes.
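Once a pericardium mask is available, EFV estimation amounts to counting voxels with fat-like attenuation; a minimal sketch assuming a commonly used adipose window of -190 to -30 HU (the paper's exact window, atlas registration, and classifier are not reproduced here, and all data are toy values):

```python
import numpy as np

def epicardial_fat_volume_ml(ct_hu, pericardium_mask, voxel_volume_ml,
                             lo=-190.0, hi=-30.0):
    """EFV = volume of voxels inside the pericardium with fat-like attenuation."""
    fat = pericardium_mask & (ct_hu >= lo) & (ct_hu <= hi)
    return float(fat.sum() * voxel_volume_ml)

# Toy CT slice: 100 voxels inside the pericardium, 40 with fat-like HU.
ct = np.full((10, 10), 50.0)              # soft-tissue attenuation
mask = np.ones_like(ct, dtype=bool)       # pericardium mask (given upstream)
ct.ravel()[:40] = -100.0                  # fat-like attenuation
print(epicardial_fat_volume_ml(ct, mask, 0.5))  # 40 voxels * 0.5 ml = 20.0
```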
Source
http://dx.doi.org/10.1117/1.JMI.3.3.034003
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5023657
July 2016

City-Scale Localization for Cameras with Known Vertical Direction.

IEEE Trans Pattern Anal Mach Intell 2017 07 5;39(7):1455-1461. Epub 2016 Aug 5.

We consider the problem of localizing a novel image in a large 3D model, given that the gravitational vector is known. In principle, this is just an instance of camera pose estimation, but the scale of the problem introduces some interesting challenges. Most importantly, it makes the correspondence problem very difficult, so there will often be a significant number of outliers to handle. To tackle this problem, we use recent theoretical as well as technical advances. Many modern cameras and phones have gravitational sensors that allow us to reduce the search space. Further, there are new techniques to efficiently and reliably deal with extreme rates of outliers. We extend these methods to camera pose estimation by using accurate approximations and fast polynomial solvers. Experimental results demonstrate that it is possible to reliably estimate the camera pose despite more than 99 percent outlier correspondences in city-scale models with several million 3D points.
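The core trick of a known vertical can be sketched as follows: rotating the camera frame so the measured gravity maps to the world's down axis leaves only a one-parameter yaw plus translation to estimate, which is what shrinks the search space (illustrative sensor reading; the paper's polynomial solvers and outlier handling are not reproduced):

```python
import numpy as np

def align_to_gravity(g_cam):
    """Rotation R such that R @ g_cam points along world down [0, 0, -1].
    After this alignment the unknown rotation is a single yaw about z."""
    g = np.asarray(g_cam, float)
    g = g / np.linalg.norm(g)
    down = np.array([0.0, 0.0, -1.0])
    v = np.cross(g, down)
    c = float(g @ down)
    if np.isclose(c, -1.0):                   # opposite vectors: 180-degree flip
        return np.diag([1.0, -1.0, -1.0])
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K / (1.0 + c)  # Rodrigues rotation formula

g_measured = np.array([0.1, -0.98, 0.05])     # accelerometer reading in camera frame
R = align_to_gravity(g_measured)
print(np.allclose(R @ (g_measured / np.linalg.norm(g_measured)), [0, 0, -1]))  # True
```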
Source
http://dx.doi.org/10.1109/TPAMI.2016.2598331
July 2017

A system for automated tracking of motor components in neurophysiological research.

J Neurosci Methods 2012 Apr 26;205(2):334-44. Epub 2012 Jan 26.

Neuronano Research Center, Department of Experimental Medical Science, BMC F10, Lund University, 22184 Lund, Sweden.

In the study of motor systems, it is often necessary to track the movements of an experimental animal in great detail to allow for interpretation of recorded brain signals corresponding to different control signals. This task becomes increasingly difficult when analyzing complex compound movements in freely moving animals. One example of a complex motor behavior that can be studied in rodents is the skilled reaching test, where animals are trained to use their forepaws to grasp small food objects, in many ways similar to human hand use. To fully exploit this model in neurophysiological research, it is desirable to describe the kinematics at the level of movements around individual joints in 3D space, since this permits analyses of how neuronal control signals relate to complex movement patterns. To this end, we have developed an automated system that estimates the paw pose using an anatomical paw model and recorded video images from six different image planes in rats chronically implanted with recording electrodes in neuronal circuits involved in the selection and execution of forelimb movements. The kinematic description provided by the system allowed for a decomposition of reaching movements into a subset of motor components. Interestingly, firing rates of individual neurons were found to be modulated in relation to the actuation of these motor components, suggesting that sets of motor primitives may constitute building blocks for the encoding of movement commands in motor circuits. The designed system will thus enable a more detailed analytical approach in neurophysiological studies of motor systems.
Source
http://dx.doi.org/10.1016/j.jneumeth.2012.01.008
April 2012