Publications by authors named "Xuesheng Bian"

3 Publications


DDA-Net: Unsupervised cross-modality medical image segmentation via dual domain adaptation.

Comput Methods Programs Biomed 2022 Jan 14;213:106531. Epub 2021 Nov 14.

Fujian Key Laboratory of Sensing and Computing for Smart Cities, Department of Computer Science, School of Informatics, Xiamen University, Xiamen 361005, China.

Background And Objective: Deep convolutional networks are powerful tools for single-modality medical image segmentation, but they generally require semantic labelling or annotation that is laborious and time-consuming. Moreover, domain shift among different modalities severely degrades the performance of deep convolutional networks trained only on labelled data from a single modality.

Methods: In this paper, we propose an end-to-end unsupervised cross-modality segmentation network, DDA-Net, for accurate medical image segmentation without semantic annotation or labelling on the target domain. To close the domain gap, images from different domains are mapped into a shared domain-invariant representation space. In addition, a cross-modality auto-encoder is introduced to preserve spatial position information, which helps keep the spatial structure of the semantic information consistent.
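
The abstract names two ingredients (a shared domain-invariant feature space and a cross-modality auto-encoder that retains spatial information) without giving implementation details. The sketch below is a minimal illustrative PyTorch setup in that spirit, not the authors' DDA-Net: the module shapes, the loss weight, and the omission of any explicit domain-alignment term (e.g. an adversarial loss) are all assumptions.

```python
# Minimal sketch of unsupervised cross-modality segmentation with a shared
# encoder and a reconstruction auto-encoder. Illustrative assumptions only;
# this is not the DDA-Net architecture from the paper.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an image from either modality into a shared feature space."""
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class SegHead(nn.Module):
    """Predicts segmentation logits from the shared features."""
    def __init__(self, feat=32, n_classes=2):
        super().__init__()
        self.head = nn.Conv2d(feat, n_classes, 1)
    def forward(self, z):
        return self.head(z)

class Decoder(nn.Module):
    """Auto-encoder decoder that reconstructs the input image, encouraging the
    shared features to retain spatial position information."""
    def __init__(self, feat=32, out_ch=1):
        super().__init__()
        self.net = nn.Conv2d(feat, out_ch, 3, padding=1)
    def forward(self, z):
        return self.net(z)

enc, seg, dec = Encoder(), SegHead(), Decoder()
seg_loss, rec_loss = nn.CrossEntropyLoss(), nn.L1Loss()

def training_step(x_src, y_src, x_tgt, w_rec=0.1):
    """Labelled source images drive the segmentation loss; unlabelled target
    images contribute only a reconstruction loss through the auto-encoder."""
    z_src, z_tgt = enc(x_src), enc(x_tgt)
    loss = seg_loss(seg(z_src), y_src)
    loss = loss + w_rec * (rec_loss(dec(z_src), x_src) + rec_loss(dec(z_tgt), x_tgt))
    return loss
```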

Results: We validated the proposed DDA-Net on cross-modality medical image datasets of the brain and the heart. The experimental results show that DDA-Net effectively alleviates domain shift and suppresses model degradation.

Conclusions: The proposed DDA-Net successfully closes the domain gap between different medical image modalities and achieves state-of-the-art performance in cross-modality medical image segmentation. It can also be generalized to other semi-supervised or unsupervised segmentation tasks in other fields.
http://dx.doi.org/10.1016/j.cmpb.2021.106531

A deep learning model for detection and tracking in high-throughput images of organoid.

Comput Biol Med 2021 Jul 25;134:104490. Epub 2021 May 25.

Fujian Key Laboratory of Sensing and Computing for Smart City, School of Informatics, Xiamen University, Xiamen, 361005, China.

An organoid, an in vitro 3D culture, closely resembles its source organ or tissue, providing an in vitro model that simulates the in vivo environment. Organoids have been extensively studied in cell biology, precision medicine, drug toxicity and efficacy testing, etc., and have proven to be of high research value. Periodic observation of organoids in microscopic images to obtain morphological or growth characteristics is essential for organoid research. Manual screening of organoids is difficult and time-consuming, yet no better solution exists in the prior art. In this paper, we establish the first high-throughput organoid image dataset for organoid detection and tracking, annotated in detail by experienced experts. Moreover, we propose a novel deep neural network (DNN) that effectively detects organoids and dynamically tracks them throughout the entire culture. Our solution consists of two steps: first, the high-throughput sequential images are processed frame by frame to detect all organoids; second, similarities between organoids in adjacent frames are computed, and the organoids in adjacent frames are matched in pairs. With the help of our proposed dataset, our model detects and tracks organoids quickly and accurately, effectively reducing the burden on researchers. To our knowledge, this is the first exploration of applying deep learning to organoid tracking. Experiments demonstrate that the proposed method achieves satisfactory results on organoid detection and tracking, verifying the great potential of deep learning technology in this field.
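
The tracking step is described only at a high level: compute similarities between detections in adjacent frames and match them in pairs. Below is a minimal sketch of such pairwise matching, assuming bounding-box detections, IoU as the similarity measure, and Hungarian assignment via SciPy; none of these specific choices are stated in the abstract.

```python
# Illustrative adjacent-frame matching: IoU similarity + Hungarian assignment.
# The similarity measure and the matcher are assumptions; the paper only says
# that similarities are computed and detections are matched in pairs.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_frames(prev_boxes, curr_boxes, min_iou=0.3):
    """Match detections in two adjacent frames; returns (prev_idx, curr_idx) pairs."""
    sim = np.array([[iou(p, c) for c in curr_boxes] for p in prev_boxes])
    rows, cols = linear_sum_assignment(-sim)  # maximise total similarity
    return [(r, c) for r, c in zip(rows, cols) if sim[r, c] >= min_iou]

# Example: two organoids tracked across consecutive frames.
frame_t  = [(10, 10, 50, 50), (80, 80, 120, 120)]
frame_t1 = [(82, 79, 121, 122), (12, 11, 52, 49)]
print(match_frames(frame_t, frame_t1))  # [(0, 1), (1, 0)]
```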
http://dx.doi.org/10.1016/j.compbiomed.2021.104490

Optic disc and optic cup segmentation based on anatomy guided cascade network.

Comput Methods Programs Biomed 2020 Dec 27;197:105717. Epub 2020 Aug 27.

Fujian Key Laboratory of Sensing and Computing for Smart Cities, Department of Computer Science, School of Informatics, Xiamen University, Xiamen 361005, China.

Background And Objective: Glaucoma, a worldwide eye disease, can cause irreversible vision damage. If not treated properly at an early stage, glaucoma eventually deteriorates into blindness. Various glaucoma screening methods are available, e.g. Ultrasound Biomicroscopy (UBM), Optical Coherence Tomography (OCT), and the Heidelberg Retina Tomograph (HRT). However, retinal fundus photography, because of its low cost, is one of the most common examinations used to diagnose glaucoma. Clinically, the cup-to-disc ratio (CDR) is an important indicator in glaucoma diagnosis. Therefore, precise fundus image segmentation to calculate the cup-to-disc ratio is the basis for glaucoma screening.
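
For context, the cup-to-disc ratio is commonly reported as the vertical cup-to-disc ratio, i.e. the vertical extent of the optic cup divided by the vertical extent of the optic disc. The sketch below computes this from two binary segmentation masks; the exact CDR definition used by the paper's evaluation is an assumption here.

```python
# Vertical cup-to-disc ratio (CDR) from binary segmentation masks.
# Illustrative only; the abstract does not specify how CDR is computed.
import numpy as np

def vertical_cdr(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Both masks are 2D boolean arrays of the same shape (True = inside region)."""
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    if len(disc_rows) == 0:
        raise ValueError("Empty optic disc mask")
    cup_height = (cup_rows[-1] - cup_rows[0] + 1) if len(cup_rows) else 0
    disc_height = disc_rows[-1] - disc_rows[0] + 1
    return cup_height / disc_height

# Toy example: a 100-pixel-tall disc containing a 45-pixel-tall cup -> CDR = 0.45.
disc = np.zeros((200, 200), dtype=bool); disc[50:150, 50:150] = True
cup  = np.zeros((200, 200), dtype=bool); cup[75:120, 80:120] = True
print(round(vertical_cdr(cup, disc), 2))  # 0.45
```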

Methods: In this paper, we propose a deep neural network that uses anatomical knowledge to guide the segmentation of fundus images, accurately segmenting the optic cup and the optic disc so that the cup-to-disc ratio can be calculated precisely. Optic disc and optic cup segmentation is a typical small-target segmentation problem in biomedical images. We propose an attention-based cascade network that effectively accelerates the convergence of small-target segmentation during training and accurately preserves the detailed contours of small targets.
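
A cascade for small targets is typically organised coarse-to-fine: a first stage localises the disc region, the image is cropped around it, and a second stage segments disc and cup inside the crop, where the small target occupies most of the field of view. The sketch below illustrates only this generic crop-and-refine structure; the two placeholder networks, the crop margin, and the absence of the paper's attention mechanism and anatomical guidance are all assumptions.

```python
# Minimal coarse-to-fine cascade sketch for optic disc / cup segmentation.
# Illustrative assumptions only; not the anatomy-guided network of the paper.
import torch
import torch.nn as nn

coarse_net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(8, 1, 1))   # coarse disc probability map
fine_net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 3, 1))     # background / disc / cup logits

def cascade_segment(fundus: torch.Tensor, margin: int = 32):
    """fundus: (1, 3, H, W). Stage 1 finds a disc ROI; stage 2 refines inside it."""
    prob = torch.sigmoid(coarse_net(fundus))[0, 0]      # (H, W) disc probability
    ys, xs = torch.where(prob > 0.5)
    if len(ys) == 0:                                    # fall back to the full image
        return fine_net(fundus)
    y0 = max(int(ys.min()) - margin, 0)
    y1 = min(int(ys.max()) + margin, fundus.shape[2])
    x0 = max(int(xs.min()) - margin, 0)
    x1 = min(int(xs.max()) + margin, fundus.shape[3])
    roi = fundus[:, :, y0:y1, x0:x1]                    # small target now fills the crop
    return fine_net(roi)
```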

Results: Our method, validated in the MICCAI REFUGE fundus image segmentation competition, achieves a Dice score of 93.31% for optic disc segmentation and 88.04% for optic cup segmentation. Moreover, it obtains a high CDR evaluation score, which is useful for glaucoma screening.

Conclusions: The proposed method successfully introduces anatomical knowledge into the segmentation task and achieves state-of-the-art performance in fundus image segmentation. It can also be used for both automatic segmentation and semi-automatic segmentation with human interaction.
http://dx.doi.org/10.1016/j.cmpb.2020.105717