Publications by authors named "Jinman Kim"

94 Publications

Predicting distant metastases in soft-tissue sarcomas from PET-CT scans using constrained hierarchical multi-modality feature learning.

Phys Med Biol 2021 Nov 24. Epub 2021 Nov 24.

The University of Sydney, Sydney, 2006, Australia.

Objective: Positron emission tomography-computed tomography (PET-CT) is regarded as the imaging modality of choice for the management of soft-tissue sarcomas (STSs). Distant metastases (DM) are the leading cause of death in STS patients, and early detection is important to effectively manage tumors with surgery, radiotherapy and chemotherapy. In this study, we aim to detect DM early in patients with STS using their PET-CT data.

Approach: We derive a new convolutional neural network (CNN) method for early DM detection. The novelty of our method is the introduction of a constrained hierarchical multi-modality feature learning approach to integrate functional imaging (PET) features with anatomical imaging (CT) features. In addition, we removed the reliance on manual input, e.g., tumor delineation, for extracting imaging features.

Main Results: Our experimental results on a well-established benchmark PET-CT dataset show that our method achieved the highest accuracy (0.896) and AUC (0.903) scores when compared to the state-of-the-art methods (unpaired Student's t-test p-value < 0.05).
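The reported AUC can be illustrated with a minimal pairwise computation (an illustrative sketch, not the authors' evaluation code; the labels and scores below are invented, not the paper's data):

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the pairwise (Mann-Whitney) identity:
    the probability that a random positive case outscores a random negative."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outscores negative
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical DM-vs-no-DM scores for six patients (invented numbers).
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.7, 0.3, 0.2]
print(round(auc_score(labels, scores), 3))  # 0.889
```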

Significance: Our method could be an effective and supportive tool to aid physicians in tumor quantification and in identifying image biomarkers for cancer treatment.
http://dx.doi.org/10.1088/1361-6560/ac3d17
November 2021

An Evaluation of the Physical and Chemical Stability of Dry Bottom Ash as a Concrete Light Weight Aggregate.

Materials (Basel) 2021 Sep 14;14(18). Epub 2021 Sep 14.

Environment-Friendly Concrete Research Institute, Kongju National University, 275 Cheonan-daero, Cheonan City 330-717, Chungcheongnam-do, Korea.

Compared to the bottom ash obtained by a water-cooling system (wBA), dry-process bottom ash (dBA) produces hardly any unburnt carbon because of its residence time at the bottom of the boiler, and contains less chloride because there is no contact with seawater. Accordingly, to identify the chemical stability of dBA as a lightweight aggregate for construction purposes, the chemical properties of dBA were evaluated by reviewing the engineering properties required of a lightweight aggregate (LWA). Typically, river gravel and crushed gravel have been used as coarse aggregates due to their physical and chemical stability. Coal ash and LWA, however, have a variety of chemical compositions, and they have specific chemical properties including SO3, unburnt coal and heavy metal content. As the minimum requirement for using coal ash and lightweight aggregate with various chemical properties as concrete aggregate, the loss on ignition, the SO3 content and the amount of chloride should be examined, and it is also necessary to examine heavy metal leaching even though it is not included in the standard specifications in Korea. Based on the results, it is believed that there are no significant physical or chemical problems in using dBA as a lightweight aggregate for concrete.
http://dx.doi.org/10.3390/ma14185291
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8468078
September 2021

Digital mapping of a manual fabrication method for paediatric ankle-foot orthoses.

Sci Rep 2021 Sep 24;11(1):19068. Epub 2021 Sep 24.

University of Sydney School of Health Sciences, Faculty of Medicine and Health & Children's Hospital at Westmead, University of Sydney, Sydney, NSW, Australia.

Ankle-foot orthoses (AFOs) are devices prescribed to improve mobility in people with neuromuscular disorders. Traditionally, AFOs are manually fabricated by an orthotist based on a plaster impression of the lower leg, which is modified to correct for impairments. This study aimed to digitally analyse this manual modification process, an important first step in understanding the craftsmanship of AFO fabrication to inform digital workflows (i.e. 3D scanning and 3D printing) as viable alternatives for AFO fabrication. Pre- and post-modified lower limb plaster casts of 50 children aged 1-18 years from a single orthotist were 3D scanned and registered. The Euclidean distance between the pre- and post-modified plaster casts was calculated, and relationships with participant characteristics (age, height, AFO type, and diagnosis) were analysed. Modification maps demonstrated that participant-specific modifications were combined with universally applied modifications on the cast's anterior and plantar surfaces. Positive differences (additions) ranged from 2.12 to 3.81 mm and negative differences (subtractions) ranged from 0.76 to 3.60 mm, with mean differences ranging from 1.37 to 3.12 mm. Height had a medium effect on plaster additions (r = 0.35). We quantified the manual plaster modification process and demonstrated a reliable method to map and compare pre- and post-modified casts used to fabricate children's AFOs.
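The per-vertex mapping between registered casts can be sketched as a nearest-neighbour Euclidean distance (an illustrative reconstruction, not the authors' pipeline; the coordinates are made up, and the sign of an addition vs. subtraction, which would come from surface normals, is omitted):

```python
import numpy as np

def modification_map(pre_vertices, post_vertices):
    """Distance (mm) from each post-modified cast vertex to the nearest
    vertex of the registered pre-modified cast. Brute force; a real
    pipeline over full 3D scans would use a KD-tree."""
    diff = post_vertices[:, None, :] - pre_vertices[None, :, :]
    return np.linalg.norm(diff, axis=2).min(axis=1)

# Toy registered point clouds in mm (invented coordinates).
pre = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
post = np.array([[0.0, 0.0, 2.0], [10.0, 0.0, -1.5]])
print(modification_map(pre, post))  # 2.0 mm and 1.5 mm of modification
```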
http://dx.doi.org/10.1038/s41598-021-98786-z
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8463714
September 2021

The Checkpoint Program: Collaborative Care to Reduce the Reliance of Frequent Presenters on ED.

Int J Integr Care 2021 Jun 22;21(2):29. Epub 2021 Jun 22.

Nepean Blue Mountains Local Health District, AU.

Introduction: Growing pressures upon Emergency Departments [ED] call for new ways of working with frequent presenters who, although small in number, place extensive demands on services, to say nothing of the costs and consequences for the patients themselves. EDs are often poorly equipped to address the multi-dimensional nature of patient need and the complex circumstances surrounding repeated presentation. Employing a model of intensive short-term community-based case management, the Checkpoint program sought to improve care coordination for this patient group, thereby reducing their reliance on ED.

Method: This study employed a single group interrupted time series design, evaluating patient engagement with the program and year-on-year individual differences in the number of ED visits pre and post enrolment. Associated savings were also estimated.

Results: Prior to intervention, there were two dominant modes in the ED presentation trends of patients. One group had a steady pattern with ≥7 presentations in each of the last four years. The other group had an increasing trend in presentations, peaking in the 12 months immediately preceding enrolment. Following the intervention, both groups demonstrated two consecutive year-on-year reductions. By the second year, and from an overall peak of 22.5 presentations per patient per annum, there was a 53% reduction in presentations. This yielded approximate savings of $7100 per patient.
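As a back-of-envelope check on the reported figures (a derived illustration, not a calculation from the paper itself), the 53% reduction from a 22.5-visit peak and the approximate $7100 per-patient saving together imply a cost of roughly $600 per avoided presentation:

```python
# Figures taken from the abstract.
peak_visits = 22.5           # ED presentations per patient per annum at peak
reduction = 0.53             # relative reduction by the second year
savings_per_patient = 7100   # approximate savings (AUD)

avoided = peak_visits * reduction             # ~11.9 visits avoided per patient
cost_per_avoided_visit = savings_per_patient / avoided
print(round(avoided, 1), round(cost_per_avoided_visit))  # 11.9 595
```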

Discussion: Efforts to improve care coordination, when combined with proactive case management in the community, can impact positively on ED re-presentation rates, provided they are concerted, sufficiently intensive and embed the principles of integration.

Conclusion: The Checkpoint program demonstrated sufficient promise to warrant further exploration of its sustainability. However, health services have yet to determine the ideal organisational structures and funding arrangements to support such initiatives.
http://dx.doi.org/10.5334/ijic.5532
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8231479
June 2021

Machine Learning Algorithms, Applied to Intact Islets of Langerhans, Demonstrate Significantly Enhanced Insulin Staining at the Capillary Interface of Human Pancreatic β Cells.

Metabolites 2021 Jun 7;11(6). Epub 2021 Jun 7.

Charles Perkins Centre, School of Medical Sciences, University of Sydney, Camperdown 2006, Australia.

Pancreatic β cells secrete the hormone insulin into the bloodstream and are critical in the control of blood glucose concentrations. β cells are clustered in the micro-organs of the islets of Langerhans, which have a rich capillary network. Recent work has highlighted the intimate spatial connections between β cells and these capillaries, which lead to the targeting of insulin secretion to the region where the β cells contact the capillary basement membrane. In addition, β cells orientate with respect to the capillary contact point and many proteins are differentially distributed at the capillary interface compared with the rest of the cell. Here, we set out to develop an automated image analysis approach to identify individual β cells within intact islets and to determine if the distribution of insulin across the cells was polarised. Our results show that a U-Net machine learning algorithm correctly identified β cells and their orientation with respect to the capillaries. Using this information, we then quantified insulin distribution across the β cells to show enrichment at the capillary interface. We conclude that machine learning is a useful analytical tool to interrogate large image datasets and analyse sub-cellular organisation.
http://dx.doi.org/10.3390/metabo11060363
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8229564
June 2021

Automatic left ventricular cavity segmentation via deep spatial sequential network in 4D computed tomography.

Comput Med Imaging Graph 2021 07 9;91:101952. Epub 2021 Jun 9.

School of Computer Science, University of Sydney, NSW 2006, Australia. Electronic address:

Automated segmentation of left ventricular cavity (LVC) in temporal cardiac image sequences (consisting of multiple time-points) is a fundamental requirement for quantitative analysis of cardiac structural and functional changes. Deep learning methods for segmentation are the state-of-the-art in performance; however, these methods are generally formulated to work on a single time-point, and thus disregard the complementary information available from the temporal image sequences that can aid in segmentation accuracy and consistency across the time-points. In particular, single time-point segmentation methods perform poorly in segmenting the end-systole (ES) phase image in the cardiac sequence, where the left ventricle deforms to the smallest irregular shape, and the boundary between the blood chamber and the myocardium becomes inconspicuous and ambiguous. To overcome these limitations in automatically segmenting temporal LVCs, we present a spatial sequential network (SS-Net) to learn the deformation and motion characteristics of the LVCs in an unsupervised manner; these characteristics are then integrated with sequential context information derived from bi-directional learning (BL) where both chronological and reverse-chronological directions of the image sequence are used. Our experimental results on a cardiac computed tomography (CT) dataset demonstrate that our spatial-sequential network with bi-directional learning (SS-BL-Net) outperforms existing methods for spatiotemporal LVC segmentation.
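The bi-directional idea, blending context from both the chronological and reverse-chronological directions, can be caricatured with a simple running average over per-frame features (purely illustrative; SS-BL-Net learns these combinations with a neural network rather than fixed weights):

```python
import numpy as np

def bidirectional_context(frames, alpha=0.5):
    """Blend each frame with context accumulated chronologically (fwd) and
    reverse-chronologically (bwd). A hand-rolled stand-in for the learned
    bi-directional combination described in the abstract."""
    fwd = np.zeros_like(frames, dtype=float)
    bwd = np.zeros_like(frames, dtype=float)
    fwd[0], bwd[-1] = frames[0], frames[-1]
    for t in range(1, len(frames)):                 # forward pass
        fwd[t] = alpha * frames[t] + (1 - alpha) * fwd[t - 1]
    for t in range(len(frames) - 2, -1, -1):        # backward pass
        bwd[t] = alpha * frames[t] + (1 - alpha) * bwd[t + 1]
    return 0.5 * (fwd + bwd)

seq = np.array([1.0, 2.0, 3.0, 2.0])  # toy per-frame feature values
print(bidirectional_context(seq))     # each frame smoothed using both ends
```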
http://dx.doi.org/10.1016/j.compmedimag.2021.101952
July 2021

Deep Cognitive Gate: Resembling Human Cognition for Saliency Detection.

IEEE Trans Pattern Anal Mach Intell 2021 Mar 23;PP. Epub 2021 Mar 23.

Saliency detection by humans refers to the ability to identify pertinent information using our perceptive and cognitive capabilities. While human perception is attracted by visual stimuli, our cognitive capability is derived from the inspiration of constructing concepts of reasoning. Saliency detection has gained intensive interest with the aim of resembling the human perceptual system. However, saliency related to human cognition, particularly the analysis of complex salient regions (the cogitating process), is yet to be fully exploited. We propose to resemble human cognition, coupled with human perception, to improve saliency detection. We recognize saliency in three phases (Seeing - Perceiving - Cogitating), mimicking humans' perceptive and cognitive thinking of an image. In our method, the Seeing phase is related to human perception, and we formulate the Perceiving and Cogitating phases related to the human cognition systems via deep neural networks (DNNs) to construct a new module (Cognitive Gate) that enhances the DNN features for saliency detection. To the best of our knowledge, this is the first work to establish DNNs that resemble human cognition for saliency detection. In our experiments, our approach outperformed 17 benchmark DNN methods on six well-recognized datasets, demonstrating that resembling human cognition improves saliency detection.
http://dx.doi.org/10.1109/TPAMI.2021.3068277
March 2021

Recurrent feature fusion learning for multi-modality PET-CT tumor segmentation.

Comput Methods Programs Biomed 2021 May 11;203:106043. Epub 2021 Mar 11.

School of Computer Science, University of Sydney, NSW, Australia; Australian Research Council Training Centre for Innovative Bioengineering, NSW, Australia. Electronic address:

Background And Objective: [18F]-fluorodeoxyglucose (FDG) positron emission tomography - computed tomography (PET-CT) is now the preferred imaging modality for staging many cancers. PET images characterize tumoral glucose metabolism while CT depicts the complementary anatomical localization of the tumor. Automatic tumor segmentation is an important step in image analysis in computer-aided diagnosis systems. Recently, fully convolutional networks (FCNs), with their ability to leverage annotated datasets and extract image feature representations, have become the state-of-the-art in tumor segmentation. There are limited FCN-based methods that support multi-modality images, and current methods have primarily focused on the fusion of multi-modality image features at various stages, i.e., early-fusion where the multi-modality image features are fused prior to the FCN, late-fusion with the resultant features fused, and hyper-fusion where multi-modality image features are fused across multiple image feature scales. Early- and late-fusion methods, however, have inherent, limited freedom to fuse complementary multi-modality image features. The hyper-fusion methods learn different image features across different image feature scales that can result in inaccurate segmentations, in particular, in situations where the tumors have heterogeneous textures.

Methods: We propose a recurrent fusion network (RFN), which consists of multiple recurrent fusion phases to progressively fuse the complementary multi-modality image features with intermediary segmentation results derived at the individual recurrent fusion phases: (1) the recurrent fusion phases iteratively learn the image features and then refine the subsequent segmentation results; and (2) the intermediary segmentation results allow our method to focus on learning the multi-modality image features around these intermediary segmentation results, which minimizes the risk of inconsistent feature learning.

Results: We evaluated our method on two pathologically proven non-small cell lung cancer PET-CT datasets. We compared our method to the commonly used fusion methods (early-fusion, late-fusion and hyper-fusion) and the state-of-the-art PET-CT tumor segmentation methods on various network backbones (ResNet, DenseNet and 3D-UNet). Our results show that the RFN provides more accurate segmentation compared to the existing methods and is generalizable to different datasets.

Conclusions: We show that learning through multiple recurrent fusion phases allows the iterative re-use of multi-modality image features that refines tumor segmentation results. We also identify that our RFN produces consistent segmentation results across different network architectures.
http://dx.doi.org/10.1016/j.cmpb.2021.106043
May 2021

A Mobile App and Dashboard for Early Detection of Infectious Disease Outbreaks: Development Study.

JMIR Public Health Surveill 2021 03 9;7(3):e14837. Epub 2021 Mar 9.

School of Computer Science, The University of Sydney, Darlington, Australia.

Background: Outbreaks of infectious diseases pose great risks, including hospitalization and death, to public health. Therefore, improving the management of outbreaks is important for preventing widespread infection and mitigating associated risks. Mobile health technology provides new capabilities that can help better capture, monitor, and manage infectious diseases, including the ability to quickly identify potential outbreaks.

Objective: This study aims to develop a new infectious disease surveillance (IDS) system comprising a mobile app for accurate data capture and a dashboard for better health care planning and decision making.

Methods: We developed the IDS system using a 2-pronged approach: a literature review on available and similar disease surveillance systems to understand the fundamental requirements and face-to-face interviews to collect specific user requirements from the local public health unit team at the Nepean Hospital, Nepean Blue Mountains Local Health District, New South Wales, Australia.

Results: We identified 3 fundamental requirements when designing an electronic IDS system, which are the ability to capture and report outbreak data accurately, completely, and in a timely fashion. We then developed our IDS system based on the workflow, scope, and specific requirements of the public health unit team. We also produced detailed design and requirement guidelines. In our system, the outbreak data are captured and sent from anywhere using a mobile device or a desktop PC (web interface). The data are processed using a client-server architecture and, therefore, can be analyzed in real time. Our dashboard is designed to provide a daily, weekly, monthly, and historical summary of outbreak information, which can be potentially used to develop a future intervention plan. Specific information about certain outbreaks can also be visualized interactively to understand the unique characteristics of emerging infectious diseases.
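The dashboard's weekly summary view could be sketched with standard-library grouping (a hypothetical simplification; the production system's data model is not described in the abstract):

```python
from collections import Counter
from datetime import date

def weekly_summary(report_dates):
    """Count outbreak reports per ISO (year, week), as the dashboard's
    weekly view might. A flat list of dates stands in for whatever
    records the real system stores."""
    return Counter(d.isocalendar()[:2] for d in report_dates)

reports = [date(2021, 3, 1), date(2021, 3, 2), date(2021, 3, 9)]
print(weekly_summary(reports))  # two reports in ISO week 9, one in week 10
```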

Conclusions: We demonstrated the design and development of our IDS system. We suggest that the use of a mobile app and dashboard will simplify the overall data collection, reporting, and analysis processes, thereby improving the public health responses and providing accurate registration of outbreak information. Accurate data reporting and collection are a major step forward in creating a better intervention plan for future outbreaks of infectious diseases.
http://dx.doi.org/10.2196/14837
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7988388
March 2021

Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung Tumor Segmentation.

IEEE J Biomed Health Inform 2021 09 3;25(9):3507-3516. Epub 2021 Sep 3.

Multimodal positron emission tomography-computed tomography (PET-CT) is used routinely in the assessment of cancer. PET-CT combines the high sensitivity for tumor detection of PET and anatomical information from CT. Tumor segmentation is a critical element of PET-CT but at present, the performance of existing automated methods for this challenging task is low. Segmentation tends to be done manually by different imaging experts, which is labor-intensive and prone to errors and inconsistency. Previous automated segmentation methods largely focused on fusing information that is extracted separately from the PET and CT modalities, with the underlying assumption that each modality contains complementary information. However, these methods do not fully exploit the high PET tumor sensitivity that can guide the segmentation. We introduce a deep learning-based framework in multimodal PET-CT segmentation with a multimodal spatial attention module (MSAM). The MSAM automatically learns to emphasize regions (spatial areas) related to tumors and suppress normal regions with physiologic high-uptake from the PET input. The resulting spatial attention maps are subsequently employed to target a convolutional neural network (CNN) backbone for segmentation of areas with higher tumor likelihood from the CT image. Our experimental results on two clinical PET-CT datasets of non-small cell lung cancer (NSCLC) and soft tissue sarcoma (STS) validate the effectiveness of our framework in these different cancer types. We show that our MSAM, with a conventional U-Net backbone, surpasses the state-of-the-art lung tumor segmentation approach by a margin of 7.6% in Dice similarity coefficient (DSC).
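The Dice similarity coefficient (DSC) behind the reported 7.6% margin measures mask overlap; a minimal version (toy masks only, not the paper's data):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

# Toy 2 x 3 "tumor" masks (invented).
pred = [[0, 1, 1], [0, 1, 0]]
truth = [[0, 1, 1], [1, 1, 0]]
print(round(dice(pred, truth), 3))  # 0.857
```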
http://dx.doi.org/10.1109/JBHI.2021.3059453
September 2021

Vestibule segmentation from CT images with integration of multiple deep feature fusion strategies.

Comput Med Imaging Graph 2021 04 27;89:101872. Epub 2021 Jan 27.

Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China.

Vestibule segmentation is of great significance for the clinical diagnosis of congenital ear malformations and for cochlear implants. However, automated segmentation is a challenging task due to the vestibule's tiny size, blurred boundaries, and drastic variation in shape and size. In this paper, we propose a vestibule segmentation method for CT images that exploits different deep feature fusion strategies, including convolutional feature fusion across different receptive fields, channel attention based feature channel fusion, and encoder-decoder feature fusion. Experimental results on our self-established vestibule segmentation dataset show that, compared with several state-of-the-art methods, our method achieves superior segmentation accuracy.
http://dx.doi.org/10.1016/j.compmedimag.2021.101872
April 2021

Living Donor-Recipient Pair Matching for Liver Transplant via Ternary Tree Representation With Cascade Incremental Learning.

IEEE Trans Biomed Eng 2021 08 16;68(8):2540-2551. Epub 2021 Jul 16.

Visual understanding of liver vessel anatomy between a living donor-recipient (LDR) pair can assist surgeons to optimize transplant planning by avoiding non-targeted arteries, which can cause severe complications. We propose to visually analyze the anatomical variants of liver vessel anatomy to maximize similarity for finding a suitable LDR pair. Liver vessels are segmented from computed tomography angiography (CTA) volumes by employing a cascade incremental learning (CIL) model. Our CIL architecture is able to find optimal solutions, which we use to update the model with liver vessel CTA images. A novel ternary tree based algorithm is proposed to map all the possible liver vessel variants into their respective tree topologies. The tree topologies of the recipient's and donor's liver vessels are then used for an appropriate matching. The proposed algorithm utilizes a set of defined vessel tree variants which are updated to maintain the maximum matching options by leveraging the accurate segmentation results of the vessels derived from the incremental learning ability of the CIL. We introduce a novel concept of in-order digital string based comparison to match the geometry of two anatomically varied trees. Experiments through visual illustrations and quantitative analysis demonstrated the effectiveness of our approach compared to the state-of-the-art.
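The in-order digital string comparison can be illustrated on toy trees (the node layout, labels, and traversal convention below are assumptions for illustration, not the paper's exact encoding):

```python
def in_order_string(node):
    """Serialise a ternary vessel-variant tree to an in-order digital
    string so two anatomies can be compared by string equality. The node
    format (label, child1, child2, child3), '-' for an absent child, and
    the traversal order are illustrative assumptions."""
    if node is None:
        return "-"
    label, c1, c2, c3 = node
    return in_order_string(c1) + label + in_order_string(c2) + in_order_string(c3)

# Toy hepatic vessel variants: H = main trunk, R/L = right/left branches.
donor = ("H", ("R", None, None, None), ("L", None, None, None), None)
recipient = ("H", ("R", None, None, None), ("L", None, None, None), None)
variant = ("H", ("R", None, None, None), None, ("L", None, None, None))
print(in_order_string(donor) == in_order_string(recipient))  # True
print(in_order_string(donor) == in_order_string(variant))    # False
```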
http://dx.doi.org/10.1109/TBME.2021.3050310
August 2021

Supporting patients to be involved in decisions about their health and care: Development of a best practice health literacy App for Australian adults living with Chronic Kidney Disease.

Health Promot J Austr 2021 Feb 19;32 Suppl 1:115-127. Epub 2020 Oct 19.

Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia.

Issue Addressed: Inadequate health literacy is common in those with chronic kidney disease (CKD), especially among culturally and linguistically diverse groups. Patient information for people with CKD, including those with kidney failure requiring dialysis, is often written beyond their literacy level, and many CKD-related apps are not accurate or evidence based. These represent important barriers to health care decision-making and equity in access to health care.

Methods: We developed a cross-platform application (the "SUCCESS app") to support Australian adults with kidney failure requiring dialysis to actively participate in self-management and decision-making. App content was informed by health literacy theory which recognises the importance of reducing the complexity of health information as well as equipping consumers with the skills necessary to access, understand and act on this information. The development team comprised members of diverse backgrounds and expertise, including nursing, allied health, psychology, epidemiology, nephrology and IT, as well as consumer representatives.

Results: Content areas included diet, fluids, medicine, physical activity, emotional well-being and supportive care, chosen as they represent important decision points in the CKD trajectory. To support functional health literacy, a four-step process to simplify written content was used including calculating readability statistics, applying the Patient Education Materials Assessment Tool, supplementing written information with video and audio content, and incorporating micro-learning and interactive quizzes. To develop communicative and critical health literacy skills, question prompt lists and evidence-based volitional help sheets were included in each module to support question-asking and behaviour change. We also developed animated skills training related to communication, shared decision-making and critical appraisal of health information.
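One commonly calculated readability statistic is the Flesch Reading Ease score; a naive sketch (whether the SUCCESS team used this exact statistic is not stated in the abstract, and the syllable counter here is a crude vowel-group heuristic):

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentence)
    - 84.6*(syllables/word). Higher scores mean easier reading."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    # Crude heuristic: each run of vowels counts as one syllable.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

simple = "Drink water. Eat well. Walk each day."
print(round(flesch_reading_ease(simple), 1))  # high score: very easy to read
```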

Conclusions: This is the first health literacy informed app developed to promote active patient participation in CKD management and decision-making. Ongoing evaluation of the SUCCESS app through analysis of quantitative and qualitative data will provide valuable insights into the feasibility of implementing the app with dialysis patients, and the impact of the intervention of psychosocial and clinical outcomes. SO WHAT?: Digital health solutions have been found to improve self-management for chronic conditions, and could optimise the use of health care services and patient outcomes.
http://dx.doi.org/10.1002/hpja.416
February 2021

Telehealth for Noncritical Patients With Chronic Diseases During the COVID-19 Pandemic.

J Med Internet Res 2020 08 7;22(8):e19493. Epub 2020 Aug 7.

School of Computer Science, The University of Sydney, NSW, Australia.

During the recent coronavirus disease (COVID-19) pandemic, telehealth has received greater attention due to its role in reducing hospital visits from patients with COVID-19 or other conditions, while supporting home isolation in patients with mild symptoms. The needs of patients with chronic diseases tend to be overlooked during the pandemic. With reduced opportunities for routine clinic visits, these patients are adopting various telehealth services such as video consultation and remote monitoring. We advocate for more innovative designs to be considered to enhance patients' feelings of "copresence"-a sense of connection with another interactant via digital technology-with their health care providers during this time. The copresence-enhanced design has been shown to reduce patients' anxiety and increase their confidence in managing their chronic disease condition. It has the potential to reduce the patient's need to reach out to their health care provider during a time when health care resources are being stretched.
http://dx.doi.org/10.2196/19493
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7419134
August 2020

Predicting EGFR mutation subtypes in lung adenocarcinoma using 18F-FDG PET/CT radiomic features.

Transl Lung Cancer Res 2020 Jun;9(3):549-562

Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai Medical College, Fudan University, Shanghai 200032, China.

Background: Identification of epidermal growth factor receptor (EGFR) mutation types is crucial before tyrosine kinase inhibitor (TKI) treatment. Radiomics is a new strategy to noninvasively predict the genetic status of cancer. In this study, we aimed to develop a predictive model based on 18F-fluorodeoxyglucose positron emission tomography-computed tomography (18F-FDG PET/CT) radiomic features to identify the specific EGFR mutation subtypes.

Methods: We retrospectively studied 18F-FDG PET/CT images of 148 patients with isolated lung lesions, which were scanned in two hospitals with different CT scan settings (slice thickness: 3 and 5 mm, respectively). The tumor regions were manually segmented on PET/CT images, and 1,570 radiomic features (1,470 from CT and 100 from PET) were extracted from the tumor regions. Seven hundred and ninety-four radiomic features insensitive to the different CT settings were first selected using the Mann-Whitney U test, and collinear features were further removed from them by recursively calculating the variance inflation factor. Then, multiple supervised machine learning models were applied to identify prognostic radiomic features through: (I) a multi-variate random forest to select features of high importance in discriminating different EGFR mutation status; (II) a logistic regression model to select features of the highest predictive value for the EGFR subtypes. The EGFR mutation predicting model was constructed from the prognostic radiomic features using the popular XGBoost machine-learning algorithm and validated using 3-fold cross-validation. The performance of the predicting model was analyzed using the receiver operating characteristic (ROC) curve and measured with the area under the curve (AUC).
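The Mann-Whitney U filtering step can be sketched for a single radiomic feature (illustrative only; the feature values below are made up, and in practice scipy.stats.mannwhitneyu would be used):

```python
import numpy as np

def mann_whitney_u(x, y):
    """Two-sample Mann-Whitney U statistic (average ranks for ties)."""
    data = np.concatenate([x, y])
    order = data.argsort()
    ranks = np.empty(data.size)
    ranks[order] = np.arange(1, data.size + 1)
    for v in np.unique(data):          # average the ranks of tied values
        ranks[data == v] = ranks[data == v].mean()
    r1 = ranks[:x.size].sum()
    u1 = r1 - x.size * (x.size + 1) / 2
    return min(u1, x.size * y.size - u1)

# One hypothetical radiomic feature measured under the two CT settings.
x = np.array([1.2, 1.4, 1.1])  # 3 mm slice thickness (invented values)
y = np.array([2.0, 2.2, 1.9])  # 5 mm slice thickness (invented values)
print(mann_whitney_u(x, y))    # 0.0: complete separation, feature is sensitive
```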

Results: Two sets of prognostic radiomic features were found for specific EGFR mutation subtypes: 5 radiomic features for EGFR exon 19 deletions, and 5 radiomic features for the EGFR exon 21 L858R missense mutation. The corresponding radiomic predictors achieved prediction accuracies of 0.77 and 0.92 in terms of AUC, respectively. Combining these two predictors, an overall model for predicting EGFR mutation positivity was also constructed, and its AUC was 0.87.

Conclusions: In our study, we established predictive models based on radiomic analysis of 18F-FDG PET/CT images, which achieved satisfying predictive power in identifying EGFR mutation status as well as specific EGFR mutation subtypes in lung cancer.
http://dx.doi.org/10.21037/tlcr.2020.04.17
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7354146
June 2020

Unsupervised Domain Adaptation to Classify Medical Images Using Zero-Bias Convolutional Auto-Encoders and Context-Based Feature Augmentation.

IEEE Trans Med Imaging 2020 07 3;39(7):2385-2394. Epub 2020 Feb 3.

The accuracy and robustness of image classification with supervised deep learning are dependent on the availability of large-scale labelled training data. In medical imaging, such large labelled datasets are scarce, mainly due to the complexity of manual annotation. Deep convolutional neural networks (CNNs), with transferable knowledge, have been employed as a solution to limited annotated data through: 1) fine-tuning generic knowledge with a relatively smaller amount of labelled medical imaging data, and 2) learning image representation that is invariant to different domains. These approaches, however, are still reliant on labelled medical image data. Our aim is to use a new hierarchical unsupervised feature extractor to reduce reliance on annotated training data. Our unsupervised approach uses a multi-layer zero-bias convolutional auto-encoder that constrains the transformation of generic features from a pre-trained CNN (for natural images) to non-redundant and locally relevant features for the medical image data. We also propose a context-based feature augmentation scheme to improve the discriminative power of the feature representation. We evaluated our approach on 3 public medical image datasets and compared it to other state-of-the-art supervised CNNs. Our unsupervised approach achieved better accuracy when compared to other conventional unsupervised methods and baseline fine-tuned CNNs.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1109/TMI.2020.2971258
July 2020

A New Aggregation of DNN Sparse and Dense Labeling for Saliency Detection.

IEEE Trans Cybern 2020 Jan 19;PP. Epub 2020 Jan 19.

As a fundamental requirement of many computer vision systems, saliency detection has experienced substantial progress in recent years based on deep neural networks (DNNs). Most DNN-based methods rely on either sparse or dense labeling, and thus are subject to the inherent limitations of the chosen labeling scheme. DNN dense labeling captures salient objects mainly from global features, which are often hampered by other visually distinctive regions. On the other hand, DNN sparse labeling is usually impeded by the inaccurate presegmentation of the images that it depends on. To address these limitations, we propose a new framework consisting of two pathways and an Aggregator to progressively integrate the DNN sparse and DNN dense labeling schemes and derive the final saliency map. In our 'zipper'-type aggregation, we propose a multiscale kernels approach to extract optimal criteria for saliency detection, where we suppress non-salient regions in the sparse labeling while guiding the dense labeling to recognize the more complete extent of the saliency. We demonstrate that our method outperforms 11 other state-of-the-art methods in saliency detection across six well-recognized benchmark datasets.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1109/TCYB.2019.2963287
January 2020

Normative ultrasound data of the fetal transverse thalamic diameter derived from 18 to 22 weeks of gestation in routine second-trimester morphology examinations.

Australas J Ultrasound Med 2020 Feb 19;23(1):59-65. Epub 2020 Jan 19.

Sydney Medical School Nepean The University of Sydney Penrith 2750 NSW Australia.

Introduction: The thalamus is important for a wide range of sensorimotor and neuropsychiatric functions. Departure from normal reference values of the thalamus may be a biomarker for differences in neurodevelopmental outcomes and brain anomalies perinatally. Antenatal measurement of the thalamus is not currently included in routine fetal ultrasound, as differentiation of thalamic borders is difficult. The aim of this work was to present a method to standardise the thalamus measurement and provide normative data for the fetal transverse thalamic diameter between 18 and 22 weeks of gestational age.

Methods: Transverse thalamic diameter was measured by two sonographers on 1,111 stored ultrasound images at the standard transcerebellar plane. A 'guitar'-shaped representative structure is presented to demarcate the thalamic diameter. The relationship of the transverse thalamic diameter with gestational age, head circumference and transcerebellar diameter was assessed using linear regression modelling, and the mean thalamic diameter was calculated and plotted as a reference chart.

Results: Transverse thalamic diameter increased linearly and significantly with increasing gestational age, head circumference and transcerebellar diameter, and normal-range thalamic charts are presented. The guitar shape provided good reproducibility of thalamic diameter measures.

Conclusion: Measuring thalamus size in antenatal ultrasound examinations with reference to normative charts could be used to assess midline brain structures and predict neurodevelopmental disorders and, potentially, brain anomalies.
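A regression-based reference chart of the kind described above can be sketched as follows. The slope, intercept and noise level of the synthetic "measurements" are assumptions for illustration only, not the study's data; the chart logic (linear fit, mean ± 1.96 SD per week) is the standard normative-chart construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the measurements: gestational age (weeks) and
# transverse thalamic diameter (mm) with an assumed linear trend + noise.
ga = rng.uniform(18, 22, size=300)
td = 1.2 * ga - 5.0 + rng.normal(scale=0.8, size=300)

slope, intercept = np.polyfit(ga, td, 1)      # simple linear regression
resid_sd = np.std(td - (slope * ga + intercept))

# Normal-range chart: fitted mean +/- 1.96 SD at each gestational week
for week in range(18, 23):
    mean = slope * week + intercept
    print(f"{week} wk: mean {mean:.1f} mm, "
          f"range {mean - 1.96 * resid_sd:.1f}-{mean + 1.96 * resid_sd:.1f} mm")
```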
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1002/ajum.12196
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8411743
February 2020

A deep learning technique for imputing missing healthcare data.

Annu Int Conf IEEE Eng Med Biol Soc 2019 Jul;2019:6513-6516

Missing data is a frequent occurrence in medical and health datasets. The analysis of datasets with missing data can lead to loss in statistical power or biased results. We address this issue with a novel deep learning technique to impute missing values in health data. Our method extends an autoencoder to derive a deep learning architecture that can learn the hidden representations of data even when the data are perturbed by missing values (noise). Our model is constructed with an overcomplete representation and trained with denoising regularization. This allows the latent/hidden layers of our model to effectively extract the relationships between different variables; these relationships are then used to reconstruct missing values. Our contributions include a new loss function designed to avoid local optima, which helps the model to learn the real distribution of variables in the dataset. We evaluated our method against other well-established imputation strategies (mean and median imputation, SVD, KNN, matrix factorization and soft impute) on 48,350 Linked Birth/Infant Death Cohort Data records. Our experiments demonstrate that our method achieved a lower imputation mean squared error (MSE = 0.00988) than the other imputation methods (MSEs ranging from 0.02 to 0.08). When assessing imputation quality by using the imputed data for prediction tasks, our experiments show that the data imputed by our method yielded better results (F1 = 70.37%) than the other imputation methods (ranging from 66% to 69%).
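The evaluation protocol used above (mask known values, impute, score MSE on the held-out cells) can be sketched with the mean-imputation baseline, one of the comparators in the paper. The data here are synthetic stand-ins, not the cohort records, and this illustrates only the scoring, not the auto-encoder itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic complete data; mask 20% of cells as "missing".
X = rng.normal(loc=0.5, scale=0.1, size=(1000, 10))
mask = rng.random(X.shape) < 0.2

X_missing = X.copy()
X_missing[mask] = np.nan

# Mean-imputation baseline: fill each missing cell with its column mean.
col_means = np.nanmean(X_missing, axis=0)
X_imputed = np.where(mask, col_means, X_missing)

# Imputation MSE scored only on the held-out (masked) cells.
mse = float(np.mean((X_imputed[mask] - X[mask]) ** 2))
print(f"mean-imputation MSE: {mse:.5f}")   # a learned model should beat this
```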
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1109/EMBC.2019.8856760
July 2019

Deep multi-modality collaborative learning for distant metastases prediction in PET-CT soft-tissue sarcoma studies.

Annu Int Conf IEEE Eng Med Biol Soc 2019 Jul;2019:3658-3688

Soft-tissue Sarcomas (STS) are a heterogeneous group of malignant neoplasms with a relatively high mortality rate from distant metastases. Early prediction or quantitative evaluation of distant metastases risk for patients with STS is an important step which can enable better-personalized treatments and thereby improve survival rates. Positron emission tomography-computed tomography (PET-CT) imaging is regarded as the modality of choice for the evaluation, staging and assessment of STS. Radiomics, which refers to the extraction and analysis of quantitative, high-dimensional mineable data from medical images, is foreseen as an important prognostic tool for cancer risk assessment. However, conventional radiomics methods, which depend heavily on hand-crafted features (e.g., shape and texture) and prior knowledge (e.g., tuning of many parameters), cannot fully represent the semantic information of the image. Convolutional neural network (CNN) based radiomics methods show potential for improvement but are currently designed mainly for a single modality (e.g., CT) or a particular body region (e.g., the lung). In this work, we propose a deep multi-modality collaborative learning method to iteratively derive optimal ensembled deep and conventional features from PET-CT images. In addition, we introduce an end-to-end volumetric deep learning architecture to learn complementary PET-CT features optimised for image radiomics. Our experimental results using a public PET-CT dataset of STS patients demonstrate that our method has better performance when compared with the state-of-the-art methods.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1109/EMBC.2019.8857666
July 2019

Cloud-Based Automated Clinical Decision Support System for Detection and Diagnosis of Lung Cancer in Chest CT.

IEEE J Transl Eng Health Med 2020 4;8:4300113. Epub 2019 Dec 4.

Biomedical and Multimedia Information Technology Research Group, School of Information Technologies, The University of Sydney, Sydney, NSW 2006, Australia.

Lung cancer is a major cause of cancer-related deaths. The detection of pulmonary cancer in the early stages can greatly increase the survival rate. Manual delineation of lung nodules by radiologists is a tedious task. We developed a novel computer-aided decision support system for lung nodule detection based on a 3D Deep Convolutional Neural Network (3DDCNN) for assisting radiologists. Our decision support system provides a second opinion to radiologists in lung cancer diagnostic decision making. In order to leverage 3-dimensional information from Computed Tomography (CT) scans, we applied median intensity projection and a multi-Region Proposal Network (mRPN) for automatic selection of potential regions-of-interest. Our Computer Aided Diagnosis (CAD) system has been trained and validated using the LUNA16, ANODE09 and LIDC-IDRI datasets; the experiments demonstrate the superior performance of our system, attaining sensitivity, specificity, AUROC and accuracy of 98.4%, 92%, 96% and 98.51%, respectively, with 2.1 FPs per scan. We integrated cloud computing, and trained and validated our Cloud-Based 3DDCNN on the datasets provided by Shanghai Sixth People's Hospital, as well as LUNA16, ANODE09 and LIDC-IDRI. Our system outperformed the state-of-the-art systems and obtained an impressive 98.7% sensitivity at 1.97 FPs per scan. This shows the potential of deep learning, in combination with cloud computing, for accurate and efficient lung nodule detection via CT imaging, which could help doctors and radiologists in treating lung cancer patients.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1109/JTEHM.2019.2955458
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6946021
December 2019

An Automated Framework for Large Scale Retrospective Analysis of Ultrasound Images.

IEEE J Transl Eng Health Med 2019 19;7:1800909. Epub 2019 Nov 19.

School of Computer Science, The University of Sydney, Sydney, NSW 2006, Australia.

Objective: Large scale retrospective analysis of fetal ultrasound (US) data is important in the understanding of the cumulative impact of antenatal factors on offspring's health outcomes. Although the benefits are evident, there is a paucity of research into such large scale studies as it requires tedious and expensive effort in manual processing of large scale data repositories. This study presents an automated framework to facilitate retrospective analysis of large scale US data repositories.

Method: Our framework consists of four modules: (1) an image classifier to distinguish the Brightness (B) -mode images; (2) a fetal image structure identifier to select US images containing user-defined fetal structures of interest (fSOI); (3) a biometry measurement algorithm to measure the fSOIs in the images and, (4) a visual evaluation module to allow clinicians to validate the outcomes.

Results: We demonstrated our framework using the thalamus as the fSOI on a hospital repository of more than 80,000 patients, consisting of 3,816,967 antenatal US files (DICOM objects). Our framework classified 1,869,105 B-mode images, from which 38,786 thalamus images were identified. We selected a random subset of 1,290 US files with 558 B-mode images (19 containing the thalamus and the rest being other US data) and evaluated our framework's performance. On the evaluation set, B-mode image classification achieved accuracy, precision and recall (APR) of 98.67%, 99.75% and 98.57%, respectively. For fSOI identification, the APR was 93.12%, 97.76% and 80.78%, respectively.

Conclusion: We introduced a completely automated approach designed to analyze a large scale data repository to enable retrospective clinical research.
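The APR (accuracy, precision, recall) metrics reported in the results above are computed from confusion-matrix counts; the counts below are hypothetical, chosen only to make the arithmetic visible, and are not the paper's data.

```python
# Accuracy, precision and recall from true/false positive and negative counts.
def apr(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# Hypothetical counts for a 558-image evaluation set.
a, p, r = apr(tp=138, fp=2, fn=2, tn=416)
print(f"APR: {a:.2%}, {p:.2%}, {r:.2%}")
```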
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1109/JTEHM.2019.2952379
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6908460
November 2019

An automated segmentation framework for nasal computational fluid dynamics analysis in computed tomography.

Comput Biol Med 2019 12 16;115:103505. Epub 2019 Oct 16.

School of Computer Science, University of Sydney, Australia.

The use of computational fluid dynamics (CFD) to model and predict surgical outcomes in the nasal cavity is becoming increasingly popular. Despite a number of well-known nasal segmentation methods being available, there is currently a lack of an automated, CFD-targeted segmentation framework to reliably compute accurate patient-specific nasal models. This paper demonstrates the potential of a robust nasal cavity segmentation framework to automatically segment and produce nasal models for CFD. The framework was evaluated on a clinical dataset of 30 head Computed Tomography (CT) scans, and the outputs of the segmented nasal models were further compared with ground truth models in CFD simulations of pressure drop and particle deposition efficiency. The developed framework achieved a segmentation accuracy of 90.9 DSC and an average distance error of 0.3 mm. Preliminary CFD simulations revealed similar outcomes between using ground truth and segmented models. Additional analysis still needs to be conducted to verify the accuracy of using segmented models for CFD purposes.
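The DSC (Dice similarity coefficient) used above to score segmentations against ground truth is a simple overlap ratio; a minimal sketch on toy binary masks (not the paper's CT data):

```python
import numpy as np

# Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) on binary masks.
def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

gt = np.zeros((8, 8), dtype=int); gt[2:6, 2:6] = 1    # 16-pixel ground truth
seg = np.zeros((8, 8), dtype=int); seg[3:7, 2:6] = 1  # prediction shifted by 1
print(f"DSC: {dice(gt, seg):.3f}")
```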
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1016/j.compbiomed.2019.103505
December 2019

Emotion sharing in remote patient monitoring of patients with chronic kidney disease.

J Am Med Inform Assoc 2020 02;27(2):185-193

School of Computer Science, The University of Sydney, Camperdown, Australia.

Objective: To investigate the relationship between emotion sharing and technically troubled dialysis (TTD) in a remote patient monitoring (RPM) setting.

Materials And Methods: A custom software system was developed for home hemodialysis patients to use in an RPM setting, with focus on emoticon sharing and sentiment analysis of patients' text data. We analyzed the outcome of emoticon and sentiment against TTD. Logistic regression was used to assess the relationship between patients' emotions (emoticon and sentiment) and TTD.

Results: Usage data were collected from January 1, 2015 to June 1, 2018 from 156 patients who actively used the app system, with a total of 31,159 dialysis sessions recorded. Overall, 122 patients (78%) made use of the emoticon feature, while 146 patients (94%) wrote at least one session note for sentiment analysis. In total, 4,087 (13%) sessions were classified as TTD. In the multivariate model, when compared to sessions with self-reported very happy emoticons, those with sad emoticons showed significantly higher associations with TTD (aOR 4.97; 95% CI 4.13-5.99; P < .001). Similarly, negative sentiments also revealed significant associations with TTD (aOR 1.56; 95% CI 1.22-2; P = .003) when compared to positive sentiments.

Discussion: The distribution of emoticons varied greatly when compared to sentiment analysis outcomes due to the differences in the design features. The emoticon feature was generally easier to understand and quicker to input while the sentiment analysis required patients to manually input their personal thoughts.

Conclusion: Patients on home hemodialysis actively expressed their emotions during RPM. Negative emotions were found to have significant associations with TTD. The use of emoticons and sentiment analysis may serve as a predictive indicator for prolonged TTD.
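An adjusted odds ratio like the aORs reported above is exp(β) for the corresponding logistic-regression coefficient. A minimal sketch of that relationship, using an unadjusted 2×2 odds ratio with invented session counts (not the study's data):

```python
import math

# Unadjusted odds ratio from a 2x2 table: (exposed odds) / (reference odds).
def odds_ratio(ttd_sad, ok_sad, ttd_happy, ok_happy):
    return (ttd_sad / ok_sad) / (ttd_happy / ok_happy)

# Hypothetical counts: TTD vs. non-TTD sessions by reported emoticon.
or_ = odds_ratio(ttd_sad=200, ok_sad=400, ttd_happy=300, ok_happy=3000)
beta = math.log(or_)   # the coefficient a logistic regression would learn
print(f"OR = {or_:.1f}, beta = {beta:.2f}, exp(beta) = {math.exp(beta):.1f}")
```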
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1093/jamia/ocz183
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7647270
February 2020

Tetracycline Analogs Inhibit Osteoclast Differentiation by Suppressing MMP-9-Mediated Histone H3 Cleavage.

Int J Mol Sci 2019 Aug 19;20(16). Epub 2019 Aug 19.

School of Biological Sciences, College of Natural Sciences, Chungbuk National University, Cheongju, Chungbuk 361-763, Korea.

Osteoporosis is a common disorder of bone remodeling, caused by the imbalance between bone resorption by osteoclasts and bone formation by osteoblasts. Recently, we reported that matrix metalloproteinase-9 (MMP-9)-dependent histone H3 proteolysis is a key event for proficient osteoclast formation. Although it has been reported that several MMP-9 inhibitors, such as tetracycline and its derivatives, show an inhibitory effect on osteoclastogenesis, the molecular mechanisms for this are not fully understood. Here we show that tetracycline analogs, especially tigecycline and minocycline, inhibit osteoclast formation by blocking MMP-9-mediated histone H3 tail cleavage. Our molecular docking approach found that tigecycline and minocycline are the most potent inhibitors of MMP-9. We also observed that both inhibitors significantly inhibited H3 tail cleavage by MMP-9 in vitro. These compounds inhibited receptor activator of nuclear factor kappaB ligand (RANKL)-induced osteoclast formation by blocking the NFATc1 signaling pathway. Furthermore, MMP-9-mediated H3 tail cleavage during osteoclast differentiation was selectively blocked by these compounds. Treatment with both tigecycline and minocycline rescued the osteoporotic phenotype induced by prednisolone in a zebrafish osteoporosis model. Our findings demonstrate that the tetracycline analogs suppress osteoclastogenesis via MMP-9-mediated H3 tail cleavage, and suggest that MMP-9 inhibition could offer a new strategy for the treatment of glucocorticoid-induced osteoporosis.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.3390/ijms20164038
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6719029
August 2019

p32 is a negative regulator of p53 tetramerization and transactivation.

Mol Oncol 2019 09 30;13(9):1976-1992. Epub 2019 Jul 30.

Department of Biochemistry and Molecular Medicine, Norris Comprehensive Cancer Center, University of Southern California, Los Angeles, CA, USA.

p53 is a sequence-specific transcription factor, and proper regulation of p53 transcriptional activity is critical for orchestrating different tumor-suppressive mechanisms. p32 is a multifunctional protein which interacts with a large number of viral proteins and transcription factors. Here, we investigate the effect of p32 on p53 transactivation and identify a novel mechanism by which p32 alters the functional characteristics of p53. Specifically, p32 attenuates p53-dependent transcription through impairment of p53 binding to its response elements on target genes. Upon p32 expression, p53 levels bound at target genes are decreased, and p53 target genes are inactivated, strongly indicating that p32 restricts p53 occupancy and function at target genes. The primary mechanism contributing to the observed action of p32 is the ability of p32 to interact with the p53 tetramerization domain and to block p53 tetramerization, which in turn enhances nuclear export and degradation of p53, leading to defective p53 transactivation. Collectively, these data establish p32 as a negative regulator of p53 function and suggest the therapeutic potential of targeting p32 for cancer treatment.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1002/1878-0261.12543
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6717765
September 2019

Convolutional sparse kernel network for unsupervised medical image analysis.

Med Image Anal 2019 08 12;56:140-151. Epub 2019 Jun 12.

School of Computer Science, University of Sydney, NSW, Australia. Electronic address:

The availability of large-scale annotated image datasets and recent advances in supervised deep learning methods enable the end-to-end derivation of representative image features that can impact a variety of image analysis problems. Such supervised approaches, however, are difficult to implement in the medical domain where large volumes of labelled data are difficult to obtain due to the complexity of manual annotation and inter- and intra-observer variability in label assignment. We propose a new convolutional sparse kernel network (CSKN), which is a hierarchical unsupervised feature learning framework that addresses the challenge of learning representative visual features in medical image analysis domains where there is a lack of annotated training data. Our framework has three contributions: (i) we extend kernel learning to identify and represent invariant features across image sub-patches in an unsupervised manner. (ii) We initialise our kernel learning with a layer-wise pre-training scheme that leverages the sparsity inherent in medical images to extract initial discriminative features. (iii) We adapt a multi-scale spatial pyramid pooling (SPP) framework to capture subtle geometric differences between learned visual features. We evaluated our framework in medical image retrieval and classification on three public datasets. Our results show that our CSKN had better accuracy when compared to other conventional unsupervised methods and comparable accuracy to methods that used state-of-the-art supervised convolutional neural networks (CNNs). Our findings indicate that our unsupervised CSKN provides an opportunity to leverage unannotated big data in medical imaging repositories.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1016/j.media.2019.06.005
August 2019

Co-Learning Feature Fusion Maps from PET-CT Images of Lung Cancer.

IEEE Trans Med Imaging 2019 Jun 17. Epub 2019 Jun 17.

The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images for computer aided diagnosis applications (e.g., detection and segmentation) requires combining the sensitivity of PET to detect abnormal regions with anatomical localization from CT. Current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across the different modalities, which have different priorities at different locations. For example, a high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve fusion of the complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns to fuse complementary information for multi-modality medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across different spatial locations. These fusion maps are then multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis. We evaluated the ability of our CNN to detect and segment multiple regions (lungs, mediastinum, tumors) with different fusion requirements using a dataset of PET-CT images of lung cancer. We compared our method to baseline techniques for multi-modality image fusion (fused inputs (FS), multi-branch (MB) techniques, and multichannel (MC) techniques) and segmentation. 
Our findings show that our CNN had a significantly higher foreground detection accuracy (99.29%, p < 0.05) than the fusion baselines (FS: 99.00%, MB: 99.08%, MC: 98.92%) and a significantly higher Dice score (63.85%) than recent PET-CT tumor segmentation methods.
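The spatially varying fusion-map idea described above can be sketched as a per-pixel softmax over modality scores that reweights each modality's features before they are combined. The random feature and score maps below are stand-ins for learned CNN activations, not the authors' network.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in modality-specific feature maps and per-modality fusion scores.
pet_feat = rng.normal(size=(1, 32, 32))   # "PET" feature map
ct_feat = rng.normal(size=(1, 32, 32))    # "CT" feature map
scores = rng.normal(size=(2, 32, 32))     # one score map per modality

# Softmax across the modality axis: at each pixel the two weights sum to 1,
# so the fusion can prefer PET in some regions and CT in others.
exp = np.exp(scores - scores.max(axis=0, keepdims=True))
weights = exp / exp.sum(axis=0, keepdims=True)

fused = weights[0] * pet_feat + weights[1] * ct_feat
print(fused.shape)
```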
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1109/TMI.2019.2923601
June 2019

A web-based multidisciplinary team meeting visualisation system.

Int J Comput Assist Radiol Surg 2019 Dec 21;14(12):2221-2231. Epub 2019 May 21.

Biomedical and Multimedia Information Technology Research Group, School of Computer Science, The University of Sydney, Sydney, Australia.

Purpose: Multidisciplinary team meetings (MDTs) are the standard of care for safe, effective patient management in modern hospital-based clinical practice. Medical imaging data are often the central discussion points in many MDTs, and these data are typically visualised, by all participants, on a common large display. We propose a Web-based MDT visualisation system (WMDT-VS) to allow individual participants to view the data on their own personal computing devices, with the potential to customise the imaging data, i.e., to show a different view of the data from that on the common display, tailored to each participant's clinical perspective.

Methods: We developed the WMDT-VS by leveraging the state-of-the-art Web technologies to support four MDT visualisation features: (1) 2D and 3D visualisations for multiple imaging modality data; (2) a variety of personal computing devices, e.g. smartphone, tablets, laptops and PCs, to access and navigate medical images individually and share the visualisations; (3) customised participant visualisations; and (4) the addition of extra local image data for visualisation and discussion.

Results: We demonstrated these MDT visualisation features in two simulated MDT settings using different imaging data and usage scenarios, and measured the compatibility and performance of various personal, consumer-level computing devices.

Conclusions: Our WMDT-VS provides a more comprehensive visualisation experience for MDT participants.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1007/s11548-019-01999-x
December 2019

Robust deep learning method for choroidal vessel segmentation on swept source optical coherence tomography images.

Biomed Opt Express 2019 Apr 5;10(4):1601-1612. Epub 2019 Mar 5.

Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080, China.

Accurate choroidal vessel segmentation of swept-source optical coherence tomography (SS-OCT) images provides unprecedented quantitative analysis towards the understanding of choroid-related diseases. Motivated by the leading segmentation performance that deep learning methods have achieved in medical images, in this study we proposed the adoption of a deep learning method, RefineNet, to segment the choroidal vessels in SS-OCT images. We quantitatively evaluated the RefineNet on 40 SS-OCT images consisting of ~3,900 manually annotated choroidal vessel regions. We achieved a segmentation agreement (SA) of 0.840 ± 0.035 with clinician 1 (C1) and 0.823 ± 0.027 with clinician 2 (C2). These results were higher than the inter-observer variability in SA between C1 and C2 of 0.821 ± 0.037. Our results demonstrated that choroidal vessels can be automatically segmented from SS-OCT images using a deep learning method, providing a new approach towards an objective and reproducible quantitative analysis of vessel regions.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1364/BOE.10.001601
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6485000
April 2019