Publications by authors named "Caifeng Shan"

29 Publications


Effect of Fufang Huangqi Decoction on the Gut Microbiota in Patients With Class I or II Myasthenia Gravis.

Front Neurol 2022 18;13:785040. Epub 2022 Mar 18.

The Affiliated Hospital of Liaoning University of Traditional Chinese Medicine, Liaoning Provincial Key Laboratory for Diagnosis and Treatment of Myasthenia Gravis, Liaoning University of Traditional Chinese Medicine, Shenyang, China.

Objective: To investigate the effect of Fufang Huangqi Decoction on the gut microbiota in patients with class I or II myasthenia gravis (MG) and to explore the correlation between gut microbiota and MG (registration number, ChiCTR2100048367; registration website, http://www.chictr.org.cn/listbycreater.aspx; NCBI: SRP338707).

Methods: In this study, microbial community composition and diversity analyses were carried out on fecal specimens from MG patients who did not take Fufang Huangqi Decoction (control group, n = 8) and those who took Fufang Huangqi Decoction and achieved remarkable alleviation of symptoms (medicated group, n = 8). The abundance, within- and between-habitat diversity, taxonomic differences and corresponding discriminating markers of the gut microbiota in the control and medicated groups were assessed.

Results: Compared with the control group, the medicated group showed a significantly decreased abundance of Bacteroidetes (P < 0.05) and a significantly increased abundance of Actinobacteria at the phylum level, a significantly decreased abundance of Bacteroidaceae (P < 0.05) and a significantly increased abundance of Bifidobacteriaceae at the family level, and a significantly decreased abundance of certain genera (P < 0.05) together with a significantly increased abundance of others at the genus level. Compared to the control group, the medicated group had decreased abundance, diversity, and genetic diversity of the communities and increased coverage, but the differences were not significant (P > 0.05); the markers at the genus level that differed significantly between communities and drove the between-group differences were also identified.

Conclusions: MG patients have obvious gut microbiota-associated metabolic disorders. Fufang Huangqi Decoction regulates the gut microbiota in patients with class I or II MG by reducing the abundance of certain genera and increasing that of others. The correlation between gut microbiota and MG may be related to cell-mediated immunity.
Source
http://dx.doi.org/10.3389/fneur.2022.785040
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8971287
March 2022

Constructing Stronger and Faster Baselines for Skeleton-based Action Recognition.

IEEE Trans Pattern Anal Mach Intell 2022 Mar 7;PP. Epub 2022 Mar 7.

One essential problem in skeleton-based action recognition is how to extract discriminative features over all skeleton joints. However, recent State-Of-The-Art (SOTA) models for this task tend to be exceedingly complex and over-parameterized. The low efficiency in model training and inference has increased the cost of validating model architectures on large-scale datasets. To address this issue, recent advanced separable convolutional layers are embedded into an early-fused Multiple Input Branches (MIB) network, constructing an efficient Graph Convolutional Network (GCN) baseline for skeleton-based action recognition. In addition, based on this baseline, we design a compound scaling strategy to expand the model's width and depth synchronously, eventually obtaining a family of efficient GCN baselines with high accuracy and small numbers of trainable parameters, termed EfficientGCN-Bx, where "x" denotes the scaling coefficient. On two large-scale datasets, i.e., NTU RGB+D 60 and 120, the proposed EfficientGCN-B4 baseline outperforms other SOTA methods, e.g., achieving 92.1% accuracy on the cross-subject benchmark of the NTU 60 dataset, while being 5.82x smaller and 5.85x faster than MS-G3D, one of the SOTA methods. The source code in PyTorch and the pretrained models are available at https://github.com/yfsong0709/EfficientGCNv1.
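A minimal sketch of the compound-scaling idea described above, assuming illustrative base channel/block counts and width/depth multipliers (the actual EfficientGCN configuration may differ):

```python
import math

def compound_scale(base_channels, base_blocks, x, alpha=1.2, beta=1.35):
    """Scale network width and depth together for a scaling coefficient x.

    base_channels: channel counts per stage of the baseline (B0).
    base_blocks:   block counts per stage of the baseline (B0).
    alpha, beta:   illustrative width/depth multipliers per scaling step.
    """
    width_mult = alpha ** x
    depth_mult = beta ** x
    channels = [int(math.ceil(c * width_mult)) for c in base_channels]
    blocks = [int(math.ceil(b * depth_mult)) for b in base_blocks]
    return channels, blocks

# Example: derive a hypothetical "B4" variant from a small baseline.
channels, blocks = compound_scale([64, 128, 256], [2, 2, 2], x=4)
print(channels, blocks)
```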
Source
http://dx.doi.org/10.1109/TPAMI.2022.3157033
March 2022

Weakly-supervised learning for catheter segmentation in 3D frustum ultrasound.

Comput Med Imaging Graph 2022 03 29;96:102037. Epub 2022 Jan 29.

Eindhoven University of Technology, Eindhoven, The Netherlands.

Accurate and efficient catheter segmentation in 3D ultrasound (US) is essential for ultrasound-guided cardiac interventions. State-of-the-art segmentation algorithms, based on convolutional neural networks (CNNs), suffer from high computational cost and large 3D data sizes for GPU implementation, which is far from satisfactory for real-time applications. In this paper, we propose a novel approach for efficient catheter segmentation in 3D US. Instead of using Cartesian US, our approach performs catheter segmentation in Frustum US (i.e., the US data before scan conversion). Compared to Cartesian US, Frustum US has a much smaller volume size, so the catheter can be segmented more efficiently. However, annotating the irregular and deformed Frustum images is challenging, and obtaining voxel-level annotations is laborious. To address this, we propose a weakly supervised learning framework that requires only bounding-box annotations. Voxel labels are generated by combining class activation maps with line filtering and are iteratively updated during the training cycles. Our experimental results show that, compared to Cartesian US, the catheter can be segmented much more efficiently in Frustum US (i.e., 0.25 s per volume) with better accuracy. Extensive experiments also validate the effectiveness of the proposed weakly supervised learning method.
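The pseudo-label generation step above (class activation maps combined with line filtering inside the bounding box) could look roughly like the following sketch; the `sato` ridge filter, the threshold values and the `pseudo_labels_from_cam` helper are stand-ins for illustration, not the paper's implementation:

```python
import numpy as np
from skimage.filters import sato  # multiscale ridge (line-like) filter, works on 2D/3D arrays

def pseudo_labels_from_cam(cam, bbox, cam_thresh=0.5, line_thresh=0.1):
    """Turn a class activation map plus a bounding box into voxel pseudo-labels.

    cam:  3D array of class activation values in [0, 1] (hypothetical input).
    bbox: (z0, z1, y0, y1, x0, x1) bounding-box annotation of the catheter.
    """
    z0, z1, y0, y1, x0, x1 = bbox
    mask = np.zeros_like(cam, dtype=bool)
    roi = cam[z0:z1, y0:y1, x0:x1]
    # Keep strongly activated voxels inside the box ...
    candidate = roi >= cam_thresh
    # ... and retain only line-like (catheter-shaped) structures among them.
    ridge = sato(roi.astype(np.float32), black_ridges=False)
    ridge = ridge / (ridge.max() + 1e-8)
    mask[z0:z1, y0:y1, x0:x1] = candidate & (ridge >= line_thresh)
    return mask

labels = pseudo_labels_from_cam(np.random.rand(64, 64, 64), (10, 50, 12, 52, 14, 54))
```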
Source
http://dx.doi.org/10.1016/j.compmedimag.2022.102037
March 2022

Progressive Residual Learning with Memory Upgrade for Ultrasound Image Blind Super-resolution.

IEEE J Biomed Health Inform 2022 Jan 18;PP. Epub 2022 Jan 18.

For clinical medical diagnosis and treatment, image super-resolution (SR) technology can help improve ultrasonic imaging quality and thus enhance the accuracy of disease diagnosis. However, due to differences in sensing devices or transmission media, the resolution degradation process of ultrasound imaging in real scenes is uncontrollable, especially when the blur kernel is unknown. This issue makes current end-to-end SR networks perform poorly when applied to ultrasonic images. Aiming to achieve effective SR in real ultrasound medical scenes, in this work we propose a blind deep SR method based on progressive residual learning and memory upgrade. Specifically, we estimate an accurate blur kernel from the spatial attention map block of the low-resolution (LR) ultrasound image through a multi-label classification network; we then construct three modules - an up-sampling (US) module, a residual learning (RL) module and a memory upgrading (MU) module - for ultrasound image blind SR. The US module is designed to upscale the input information, and the up-sampled residual result is used for SR reconstruction. The RL module is employed to approximate the original LR image and continuously generate the updated residual, feeding it to the next US module. The last MU module stores all progressively learned residuals, which offers increased interaction between the US and RL modules and augments detail recovery. Extensive experiments and evaluations on the benchmark CCA-US and US-CASE datasets demonstrate that the proposed approach achieves better performance than the state-of-the-art methods.
Source
http://dx.doi.org/10.1109/JBHI.2022.3142076
January 2022

Medical Instrument Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning.

IEEE J Biomed Health Inform 2022 02 4;26(2):762-773. Epub 2022 Feb 4.

Medical instrument segmentation in 3D ultrasound is essential for image-guided intervention. However, to train a successful deep neural network for instrument segmentation, a large number of labeled images is required, which is expensive and time-consuming to obtain. In this article, we propose a semi-supervised learning (SSL) framework for instrument segmentation in 3D US, which requires much less annotation effort than existing methods. To achieve SSL, a Dual-UNet is proposed to segment the instrument. The Dual-UNet leverages unlabeled data using a novel hybrid loss function, consisting of uncertainty and contextual constraints. Specifically, the uncertainty constraints leverage the uncertainty estimation of the UNet predictions and therefore improve the use of unlabeled information for SSL training. In addition, the contextual constraints exploit the contextual information of the training images, which is used as complementary information for voxel-wise uncertainty estimation. Extensive experiments on multiple ex-vivo and in-vivo datasets show that our proposed method achieves a Dice score of about 68.6%-69.1% with an inference time of about 1 s per volume. These results are better than state-of-the-art SSL methods, and the inference time is comparable to supervised approaches.
Source
http://dx.doi.org/10.1109/JBHI.2021.3101872
February 2022

CANet: Context Aware Network for Brain Glioma Segmentation.

IEEE Trans Med Imaging 2021 07 30;40(7):1763-1777. Epub 2021 Jun 30.

Automated segmentation of brain glioma plays an active role in diagnostic decision-making, progression monitoring and surgery planning. Based on deep neural networks, previous studies have shown promising technologies for brain glioma segmentation. However, these approaches lack powerful strategies to incorporate contextual information of tumor cells and their surroundings, which has been proven to be a fundamental cue for dealing with local ambiguity. In this work, we propose a novel approach named Context-Aware Network (CANet) for brain glioma segmentation. CANet captures high-dimensional and discriminative features with contexts from both the convolutional space and feature interaction graphs. We further propose context-guided attentive conditional random fields, which can selectively aggregate features. We evaluate our method using the publicly accessible brain glioma segmentation datasets BRATS2017, BRATS2018 and BRATS2019. The experimental results show that the proposed algorithm has better or competitive performance against several state-of-the-art approaches under different segmentation metrics on the training and validation sets.
Source
http://dx.doi.org/10.1109/TMI.2021.3065918
July 2021

Impact of makeup on remote-PPG monitoring.

Biomed Phys Eng Express 2020 03 4;6(3):035004. Epub 2020 Mar 4.

Philips Research, High Tech Campus 34, 5656AE Eindhoven, The Netherlands. Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands.

Camera-based remote photoplethysmography (remote-PPG) enables contactless measurement of the blood volume pulse from the human skin. Skin visibility is essential to remote-PPG, as the camera needs to capture the light reflected from the skin that penetrates deep into skin tissues and carries blood pulsation information. The use of facial makeup may jeopardize this measurement by reducing the amount of light penetrating into and reflecting from the skin. In this paper, we conduct an empirical study to thoroughly investigate the impact of makeup on remote-PPG monitoring, in both visible (RGB) and invisible (near-infrared, NIR) lighting conditions. The experiment shows that makeup has a negative influence on remote-PPG: it reduces the relative PPG strength (AC/DC) at different wavelengths and changes the normalized PPG signature across multiple wavelengths. It makes (i) pulse-rate extraction more difficult in both RGB and NIR, although NIR is less affected than RGB, and (ii) blood oxygen saturation extraction in NIR impossible. To the best of our knowledge, this is the first work that systematically investigates the impact of makeup on camera-based remote-PPG monitoring.
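The relative PPG strength (AC/DC) mentioned above can be illustrated with a small sketch; the pulse band limits and the amplitude estimate below are assumptions for illustration only:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def ppg_ac_dc_ratio(ppg, fs, band=(0.7, 4.0)):
    """Relative PPG strength: pulsatile (AC) amplitude over the mean (DC) level.

    ppg: 1D array of mean skin-pixel values over time (one wavelength/channel).
    fs:  sampling rate in Hz. band: plausible pulse-rate band (42-240 bpm).
    """
    dc = np.mean(ppg)
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    ac = filtfilt(b, a, ppg)                 # pulsatile component
    ac_amplitude = np.std(ac) * np.sqrt(2)   # rough peak amplitude of the pulsation
    return ac_amplitude / dc

# Example with a synthetic 1 Hz pulse on a constant baseline.
fs = 30.0
t = np.arange(0, 10, 1 / fs)
signal = 100 + 0.5 * np.sin(2 * np.pi * 1.0 * t)
print(ppg_ac_dc_ratio(signal, fs))
```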
Source
http://dx.doi.org/10.1088/2057-1976/ab51ba
March 2020

Multi-view 3D skin feature recognition and localization for patient tracking in spinal surgery applications.

Biomed Eng Online 2021 Jan 7;20(1). Epub 2021 Jan 7.

Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands.

Background: Minimally invasive spine surgery is dependent on accurate navigation. Computer-assisted navigation is increasingly used in minimally invasive surgery (MIS), but current solutions require the use of reference markers in the surgical field for both patient and instrument tracking.

Purpose: To improve reliability and facilitate clinical workflow, this study proposes a new marker-free tracking framework based on skin feature recognition.

Methods: Maximally Stable Extremal Regions (MSER) and Speeded Up Robust Feature (SURF) algorithms are applied for skin feature detection. The proposed tracking framework is based on a multi-camera setup for obtaining multi-view acquisitions of the surgical area. Features can then be accurately detected using MSER and SURF and afterward localized by triangulation. The triangulation error is used for assessing the localization quality in 3D.

Results: The framework was tested on a cadaver dataset and in eight clinical cases. The detected features for the entire patient datasets were found to have an overall triangulation error of 0.207 mm for MSER and 0.204 mm for SURF. The localization accuracy was compared to a system with conventional markers, serving as a ground truth. An average accuracy of 0.627 and 0.622 mm was achieved for MSER and SURF, respectively.

Conclusions: This study demonstrates that skin feature localization for patient tracking in a surgical setting is feasible. The technology shows promising results in terms of detected features and localization accuracy. In the future, the framework may be further improved by exploiting extended feature processing using modern optical imaging techniques for clinical applications where patient tracking is crucial.
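A rough sketch of the multi-view pipeline described in the Methods (feature detection in two views, matching, and triangulation), assuming calibrated projection matrices and an OpenCV build with the contrib `xfeatures2d` module for SURF:

```python
import cv2
import numpy as np

# SURF lives in opencv-contrib (and may require a build with non-free modules enabled).
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

def localize_skin_features(img_left, img_right, P_left, P_right):
    """Detect SURF features in two camera views and localize matches in 3D.

    img_left/img_right: grayscale views of the surgical area.
    P_left/P_right:     3x4 camera projection matrices from calibration (assumed known).
    """
    kp1, des1 = surf.detectAndCompute(img_left, None)
    kp2, des2 = surf.detectAndCompute(img_right, None)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T   # 2xN
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T   # 2xN
    pts4d = cv2.triangulatePoints(P_left, P_right, pts1, pts2)   # 4xN homogeneous
    pts3d = (pts4d[:3] / pts4d[3]).T                             # Nx3 metric points
    return pts3d
```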
Source
http://dx.doi.org/10.1186/s12938-020-00843-7
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7792004
January 2021

Hyperspectral Imaging for Glioblastoma Surgery: Improving Tumor Identification Using a Deep Spectral-Spatial Approach.

Sensors (Basel) 2020 Dec 5;20(23). Epub 2020 Dec 5.

Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands.

The primary treatment for malignant brain tumors is surgical resection. While gross total resection improves the prognosis, a supratotal resection may result in neurological deficits. On the other hand, accurate intraoperative identification of the tumor boundaries may be very difficult, resulting in subtotal resections. Histological examination of biopsies can be used repeatedly to help achieve gross total resection but this is not practically feasible due to the turn-around time of the tissue analysis. Therefore, intraoperative techniques to recognize tissue types are investigated to expedite the clinical workflow for tumor resection and improve outcome by aiding in the identification and removal of the malignant lesion. Hyperspectral imaging (HSI) is an optical imaging technique with the power of extracting additional information from the imaged tissue. Because HSI images cannot be visually assessed by human observers, we instead exploit artificial intelligence techniques and leverage a Convolutional Neural Network (CNN) to investigate the potential of HSI in twelve in vivo specimens. The proposed framework consists of a 3D-2D hybrid CNN-based approach to create a joint extraction of spectral and spatial information from hyperspectral images. A comparison study was conducted exploiting a 2D CNN, a 1D DNN and two conventional classification methods (SVM, and the SVM classifier combined with the 3D-2D hybrid CNN) to validate the proposed network. An overall accuracy of 80% was found when tumor, healthy tissue and blood vessels were classified, clearly outperforming the state-of-the-art approaches. These results can serve as a basis for brain tumor classification using HSI, and may open future avenues for image-guided neurosurgical applications.
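A minimal sketch of a 3D-2D hybrid spectral-spatial classifier in the spirit of the approach above; the layer sizes, kernel shapes and the folding of the spectral axis into channels are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class Hybrid3D2DNet(nn.Module):
    """Minimal 3D-2D hybrid classifier for hyperspectral patches.

    Input: (batch, 1, bands, H, W). 3D convolutions mix spectral and spatial
    information; the spectral axis is then folded into channels for 2D convolutions.
    """
    def __init__(self, bands=64, n_classes=3):
        super().__init__()
        self.spec_spatial = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)), nn.ReLU(),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(16 * bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        x = self.spec_spatial(x)                    # (B, 16, bands, H, W)
        b, c, d, h, w = x.shape
        x = x.reshape(b, c * d, h, w)               # fold spectra into channels
        x = self.spatial(x).flatten(1)
        return self.head(x)

logits = Hybrid3D2DNet()(torch.randn(2, 1, 64, 15, 15))  # e.g. tumor / healthy / vessel
```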
Source
http://dx.doi.org/10.3390/s20236955
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7730670
December 2020

Efficient and Robust Instrument Segmentation in 3D Ultrasound Using Patch-of-Interest-FuseNet with Hybrid Loss.

Med Image Anal 2021 01 7;67:101842. Epub 2020 Oct 7.

Eindhoven University of Technology, Eindhoven, the Netherlands.

Instrument segmentation plays a vital role in 3D ultrasound (US) guided cardiac intervention. Efficient and accurate segmentation during the operation is highly desired, since it can facilitate the operation, reduce operational complexity, and therefore improve the outcome. Nevertheless, current image-based instrument segmentation methods are neither efficient nor accurate enough for clinical usage. Lately, fully convolutional neural networks (FCNs), including 2D and 3D FCNs, have been used in different volumetric segmentation tasks. However, a 2D FCN cannot exploit the 3D contextual information in the volumetric data, while a 3D FCN requires high computation cost and a large amount of training data. Moreover, with limited computation resources, a 3D FCN is commonly applied with a patch-based strategy, which is therefore not efficient for clinical applications. To address these issues, we propose POI-FuseNet, which consists of a patch-of-interest (POI) selector and a FuseNet. The POI selector can efficiently select the regions of interest containing the instrument, while FuseNet can make use of 2D and 3D FCN features to hierarchically exploit contextual information. Furthermore, we propose a hybrid loss function, consisting of a contextual loss and a class-balanced focal loss, to improve the segmentation performance of the network. On a challenging ex-vivo dataset of RF-ablation catheters, our method achieved a Dice score of 70.5%, superior to the state-of-the-art methods. In addition, based on the model pre-trained on the ex-vivo dataset, our method can be adapted to an in-vivo guidewire dataset, achieving a Dice score of 66.5% for a different cardiac operation. More crucially, with the POI-based strategy, the segmentation time is reduced to around 1.3 seconds per volume, which shows the proposed method is promising for clinical use.
Source
http://dx.doi.org/10.1016/j.media.2020.101842
January 2021

Hyperspectral imaging for colon cancer classification in surgical specimens: towards optical biopsy during image-guided surgery.

Annu Int Conf IEEE Eng Med Biol Soc 2020 07;2020:1169-1173

The main curative treatment for localized colon cancer is surgical resection. However, when tumor residuals are left behind, positive margins are found during histological examination and additional treatment is needed to inhibit recurrence. Hyperspectral imaging (HSI) can offer non-invasive surgical guidance with the potential of optimizing surgical effectiveness. In this paper, we investigate the capability of HSI for automated colon cancer detection in six ex-vivo specimens, employing a spectral-spatial patch-based classification approach. The results demonstrate the feasibility of assessing the benign and malignant boundaries of the lesion with a sensitivity of 0.88 and a specificity of 0.78. The results are compared with state-of-the-art deep learning based approaches. The method with a new hybrid CNN outperforms the state-of-the-art approaches (0.74 vs. 0.82 AUC). This study paves the way for further investigation towards improving surgical outcomes with HSI.
Source
http://dx.doi.org/10.1109/EMBC44109.2020.9176543
July 2020

Tongue Tumor Detection in Hyperspectral Images Using Deep Learning Semantic Segmentation.

IEEE Trans Biomed Eng 2021 04 18;68(4):1330-1340. Epub 2021 Mar 18.

Objective: The utilization of hyperspectral imaging (HSI) for real-time tumor segmentation during surgery has recently received much attention, but it remains a very challenging task.

Methods: In this work, we propose semantic segmentation methods and compare them with other relevant deep learning algorithms for tongue tumor segmentation. To the best of our knowledge, this is the first work using deep learning semantic segmentation for tumor detection in HSI data with channel selection, accounting for more spatial tissue context, and performing a global comparison between the prediction map and the annotation per sample. Results and Conclusion: On a clinical data set with tongue squamous cell carcinoma, our best method obtains very strong results, with an average Dice coefficient and area under the ROC curve of [Formula: see text] and [Formula: see text], respectively, on the original spatial image size. The results show that very good performance can be achieved even with a limited amount of data. We demonstrate that important information regarding the tumor decision is encoded in various channels, but some channel selection and filtering is beneficial over using the full spectra. Moreover, we use both the visible (VIS) and near-infrared (NIR) spectrum, rather than only the commonly used VIS spectrum; although the VIS spectrum is generally of higher significance, we demonstrate that the NIR spectrum is crucial for capturing the tumor in some cases.

Significance: The HSI technology, augmented with accurate deep learning algorithms, has great potential to become a promising alternative to digital pathology or a supportive tool for doctors in real-time surgery.
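For reference, the Dice coefficient reported above is simply the overlap measure 2|A∩B|/(|A|+|B|); a small sketch:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[0, 1, 1], [0, 1, 0]])
b = np.array([[0, 1, 0], [0, 1, 1]])
print(dice_coefficient(a, b))   # 2*2 / (3+3) ≈ 0.667
```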
Source
http://dx.doi.org/10.1109/TBME.2020.3026683
April 2021

Meta-USR: A Unified Super-Resolution Network for Multiple Degradation Parameters.

IEEE Trans Neural Netw Learn Syst 2021 Sep 31;32(9):4151-4165. Epub 2021 Aug 31.

Recent research on single image super-resolution (SISR) has achieved great success due to the development of deep convolutional neural networks. However, most existing SISR methods merely focus on super-resolution with a single fixed integer scale factor. This simplified assumption does not meet the complex conditions of real-world images, which often suffer from various blur kernels or various levels of noise. More importantly, previous methods lack the ability to cope with arbitrary degradation parameters (scale factors, blur kernels, and noise levels) with a single model. A few methods can handle multiple degradation factors, e.g., non-integer scale factors, blurring, and noise, simultaneously within a single SISR model. In this work, we propose a simple yet powerful method termed Meta-USR, which is the first unified super-resolution network for arbitrary degradation parameters with meta-learning. In Meta-USR, a meta-restoration module (MRM) is proposed to enhance the traditional upscale module with the capability to adaptively predict the weights of the convolution filters for various combinations of degradation parameters. Thus, the MRM can not only upscale the feature maps with arbitrary scale factors but also restore the SR image under different blur kernels and noise levels. Moreover, the lightweight MRM can be placed at the end of the network, which makes it very efficient for iteratively/repeatedly searching over the various degradation factors. We evaluate the proposed method through extensive experiments on several widely used SISR benchmark data sets. The qualitative and quantitative experimental results show the superiority of our Meta-USR.
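A minimal sketch of the weight-prediction idea behind the MRM: a small network maps the degradation parameters to convolution weights that are then applied to the features. The layer sizes, the 3-dimensional degradation vector and the omission of the arbitrary-scale upsampling are simplifying assumptions, not the paper's module:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaRestorationModule(nn.Module):
    """Predict convolution weights from degradation parameters (scale, blur, noise).

    A small MLP maps the 3-dim degradation vector to the weights of a kxk conv
    that maps features to RGB; the weights thus adapt to the degradation.
    """
    def __init__(self, in_ch=32, out_ch=3, k=3):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        self.weight_net = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, out_ch * in_ch * k * k),
        )

    def forward(self, feat, degradation):
        # feat: (1, in_ch, H, W); degradation: (3,) = [scale, blur sigma, noise level]
        w = self.weight_net(degradation).view(self.out_ch, self.in_ch, self.k, self.k)
        return F.conv2d(feat, w, padding=self.k // 2)

sr = MetaRestorationModule()(torch.randn(1, 32, 48, 48), torch.tensor([2.0, 1.2, 0.01]))
```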
Source
http://dx.doi.org/10.1109/TNNLS.2020.3016974
September 2021

Efficient Medical Instrument Detection in 3D Volumetric Ultrasound Data.

IEEE Trans Biomed Eng 2021 03 18;68(3):1034-1043. Epub 2021 Feb 18.

Ultrasound-guided procedures have been applied in many clinical therapies, such as cardiac catheterization and regional anesthesia. Medical instrument detection in 3D ultrasound (US) is highly desired, but existing approaches are far from real-time performance. Our objective is to investigate an efficient instrument detection method in 3D US for practical clinical use. We propose a novel Multi-dimensional Mixed Network for efficient instrument detection in 3D US, which extracts discriminating features at the 3D full-image level with a 3D encoder and then applies a specially designed dimension reduction block to reduce the spatial complexity of the feature maps by projecting them from 3D space into 2D space. A 2D decoder is adopted to detect the instrument along the specified axes. By projecting the predicted 2D outputs, the instrument is detected or visualized in the 3D volume. Furthermore, to enable the network to better learn the discriminative information, we propose a multi-level loss function to capture both pixel- and image-level differences. We carried out extensive experiments on two datasets for two tasks: (1) catheter detection for cardiac RF ablation and (2) needle detection for regional anesthesia. Our experiments show that the proposed method achieves a detection error of 2-3 voxels with a processing time of about 0.12 s per 3D US volume. The proposed method is 3-8 times faster than the state-of-the-art methods, leading to real-time performance. The results show that our proposed method has significant clinical value for real-time 3D US-guided intervention.
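A toy sketch of the 3D-encoder / dimension-reduction / 2D-decoder layout described above, where the reduction is approximated by a simple max-projection along the depth axis (the paper's block is more elaborate, and all sizes are illustrative):

```python
import torch
import torch.nn as nn

class MixedDimNet(nn.Module):
    """3D encoder, projection-based dimension reduction, then a 2D decoder."""
    def __init__(self):
        super().__init__()
        self.encoder3d = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder2d = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),             # per-pixel instrument evidence along the projected axis
        )

    def forward(self, vol):                   # vol: (B, 1, D, H, W)
        feat3d = self.encoder3d(vol)          # (B, 32, D, H, W)
        feat2d = feat3d.max(dim=2).values     # project 3D features onto the H-W plane
        return self.decoder2d(feat2d)         # (B, 1, H, W) detection map

out = MixedDimNet()(torch.randn(1, 1, 32, 64, 64))
```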
Source
http://dx.doi.org/10.1109/TBME.2020.2999729
March 2021

Towards Optical Imaging for Spine Tracking without Markers in Navigated Spine Surgery.

Sensors (Basel) 2020 Jun 29;20(13). Epub 2020 Jun 29.

Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands.

Surgical navigation systems are increasingly used for complex spine procedures to avoid neurovascular injuries and minimize the risk for reoperations. Accurate patient tracking is one of the prerequisites for optimal motion compensation and navigation. Most current optical tracking systems use dynamic reference frames (DRFs) attached to the spine for patient movement tracking. However, the spine itself is subject to intrinsic movements which can impact the accuracy of the navigation system. In this study, we aimed to detect the actual patient spine features in different image views captured by optical cameras, in an augmented reality surgical navigation (ARSN) system. Using optical images from open spinal surgery cases, acquired by two gray-scale cameras, spinal landmarks were identified and matched in different camera views. A computer vision framework was created for preprocessing of the spine images, detecting and matching local invariant image regions. We compared four feature detection algorithms, Speeded Up Robust Feature (SURF), Maximal Stable Extremal Region (MSER), Features from Accelerated Segment Test (FAST), and Oriented FAST and Rotated BRIEF (ORB), to elucidate the best approach. The framework was validated in 23 patients and the 3D triangulation error of the matched features was < 0.5 mm. Thus, the findings indicate that spine feature detection can be used for accurate tracking in navigated surgery.
Source
http://dx.doi.org/10.3390/s20133641
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7374436
June 2020

Automatic and Continuous Discomfort Detection for Premature Infants in a NICU Using Video-Based Motion Analysis.

Annu Int Conf IEEE Eng Med Biol Soc 2019 Jul;2019:5995-5999

Frequent pain and discomfort in premature infants can lead to long-term adverse neurodevelopmental outcomes. Video-based monitoring is considered a promising contactless method for identifying discomfort moments. In this study, we propose a video-based method for automated detection of infant discomfort. The method is based on analyzing facial and body motion: motion trajectories are estimated from frame to frame using optical flow. For each video segment, we further calculate the motion acceleration rate and extract 18 time- and frequency-domain features characterizing motion patterns. A support vector machine (SVM) classifier is then applied to the video sequences to recognize the infant's status of comfort or discomfort. The method is evaluated using 183 video segments from 11 infants across 17 heel prick events. Experimental results show an AUC of 0.94 for discomfort detection and an average accuracy of 0.86 when combining all proposed features, which is promising for clinical use.
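A rough sketch of the motion-feature pipeline above, using Farnebäck dense optical flow and a handful of time- and frequency-domain statistics as stand-ins for the paper's 18 features:

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def motion_features(frames, fs=25.0):
    """Per-segment motion descriptor from dense optical flow magnitudes.

    frames: list of grayscale frames for one video segment.
    Returns a small time/frequency feature vector (far fewer than the paper's 18).
    """
    mags = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).mean())
    mags = np.asarray(mags)
    accel = np.diff(mags) * fs                     # motion acceleration rate
    spectrum = np.abs(np.fft.rfft(mags - mags.mean()))
    return np.array([mags.mean(), mags.std(), np.abs(accel).mean(),
                     accel.std(), spectrum.max()])

# X: one feature vector per video segment, y: comfort (0) / discomfort (1) labels.
# clf = SVC(kernel="rbf").fit(X_train, y_train)
```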
Source
http://dx.doi.org/10.1109/EMBC.2019.8857597
July 2019

Towards non-invasive patient tracking: optical image analysis for spine tracking during spinal surgery procedures.

Annu Int Conf IEEE Eng Med Biol Soc 2019 Jul;2019:3909-3914

Surgical navigation systems can enhance surgeon vision and form a reliable image-guided tool for complex interventions such as spinal surgery. The main prerequisite is successful patient tracking, which implies optimal motion compensation. Nowadays, optical tracking systems can satisfy the need to detect patient position during surgery, allowing navigation without the risk of damaging neurovascular structures. However, the spine is subject to vertebral movements which can impact the accuracy of the system. The aim of this paper is to investigate the feasibility of a novel approach that offers a direct relationship to movements of the spinal vertebrae during surgery. To this end, we detect and track patient spine features between different image views, captured by several optical cameras, for reconstruction of vertebral rotation and displacement. We analyze patient images acquired in a real surgical scenario by two gray-scale cameras embedded in the flat-panel detector of the C-arm. Spine segmentation is performed, and anatomical landmarks are designed and tracked between different views while experimenting with several feature detection algorithms (e.g., SURF and MSER). The 3D positions of the matched features are reconstructed, and the triangulation errors are computed for an accuracy assessment. The analysis of the triangulation accuracy reveals a mean error of 0.38 mm, which demonstrates the feasibility of spine tracking and strengthens the clinical application of optical imaging for spinal navigation.
Source
http://dx.doi.org/10.1109/EMBC.2019.8856304
July 2019

Fusion of augmented reality imaging with the endoscopic view for endonasal skull base surgery; a novel application for surgical navigation based on intraoperative cone beam computed tomography and optical tracking.

PLoS One 2020 16;15(1):e0227312. Epub 2020 Jan 16.

Eindhoven University of Technology (TU/e), Eindhoven, The Netherlands.

Objective: Surgical navigation is a well-established tool in endoscopic skull base surgery. However, navigational and endoscopic views are usually displayed on separate monitors, forcing the surgeon to focus on one or the other. Aiming to provide real-time integration of endoscopic and diagnostic imaging information, we present a new navigation technique based on augmented reality with fusion of intraoperative cone beam computed tomography (CBCT) on the endoscopic view. The aim of this study was to evaluate the accuracy of the method.

Material And Methods: An augmented reality surgical navigation system (ARSN) with 3D CBCT capability was used. The navigation system incorporates an optical tracking system (OTS) with four video cameras embedded in the flat detector of the motorized C-arm. Intra-operative CBCT images were fused with the view of the surgical field obtained by the endoscope's camera. Accuracy of CBCT image co-registration was tested using a custom-made grid with incorporated 3D spheres.

Results: Co-registration of the CBCT image on the endoscopic view was performed. Accuracy of the overlay, measured as mean target registration error (TRE), was 0.55 mm with a standard deviation of 0.24 mm, a median of 0.51 mm and an interquartile range of 0.39-0.68 mm.

Conclusion: We present a novel augmented reality surgical navigation system, with fusion of intraoperative CBCT on the endoscopic view. The system shows sub-millimeter accuracy.
Source
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0227312
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6964902
May 2020

RGB-T Salient Object Detection via Fusing Multi-level CNN Features.

IEEE Trans Image Process 2019 Dec 17. Epub 2019 Dec 17.

RGB-induced salient object detection has recently witnessed substantial progress, which is attributed to the superior feature learning capability of deep convolutional neural networks (CNNs). However, such detection suffers in challenging scenarios characterized by cluttered backgrounds, low-light conditions and variations in illumination. Instead of improving RGB-based saliency detection, this paper takes advantage of the complementary benefits of RGB and thermal infrared images. Specifically, we propose a novel end-to-end network for multi-modal salient object detection, which turns the challenge of RGB-T saliency detection into a CNN feature fusion problem. To this end, a backbone network (e.g., VGG-16) is first adopted to extract coarse features from each RGB or thermal infrared image individually, and then several adjacent-depth feature combination (ADFC) modules are designed to extract multi-level refined features for each single-modal input image, considering that features captured at different depths differ in semantic information and visual details. Subsequently, a multi-branch group fusion (MGF) module is employed to capture the cross-modal features by fusing the features from the ADFC modules for an RGB-T image pair at each level. Finally, a joint attention guided bi-directional message passing (JABMP) module undertakes the task of saliency prediction by integrating the multi-level fused features from the MGF modules. Experimental results on several public RGB-T salient object detection datasets demonstrate the superiority of our proposed algorithm over state-of-the-art approaches, especially under challenging conditions such as poor illumination, complex background and low contrast.
Source
http://dx.doi.org/10.1109/TIP.2019.2959253
December 2019

Detecting discomfort in infants through facial expressions.

Physiol Meas 2019 12 3;40(11):115006. Epub 2019 Dec 3.

Eindhoven University of Technology, Eindhoven, 5612 WH, The Netherlands.

Objective: Detecting the discomfort status of infants is particularly relevant clinically. Late treatment of infant discomfort can lead to adverse problems such as abnormal brain development, central nervous system damage and changes in the responsiveness of the neuroendocrine and immune systems to stress at maturity. In this study, we exploit deep convolutional neural network (CNN) algorithms to address the problem of discomfort detection for infants by analyzing their facial expressions.

Approach: A dataset of 55 facial-expression videos, recorded from 24 infants, is used in our study. Given the limited data available for training, we employ a pre-trained CNN model, which is fine-tuned using a public dataset with labeled facial expressions (the shoulder-pain dataset) and further refined with our infant data.

Main Results: Using a two-fold cross-validation, we achieve an area under the curve (AUC) value of 0.96, which is substantially higher than the results without any pre-training steps (AUC  =  0.77). Our method also achieves better results than the existing method based on handcrafted features. By fusing individual frame results, the AUC is further improved from 0.96 to 0.98.

Significance: The proposed system has great potential for continuous discomfort and pain monitoring in clinical practice.
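A minimal sketch of the staged fine-tuning described in the Approach, using a torchvision ResNet-18 as a stand-in backbone (the paper's exact network and training schedule are not specified here); the `weights=` API assumes torchvision ≥ 0.13:

```python
import torch.nn as nn
from torchvision import models

def build_finetune_model(n_classes=2, freeze_backbone=True):
    """Stage-wise transfer learning: start from ImageNet weights, replace the head,
    then fine-tune on an intermediate facial-expression set before the infant data.
    ResNet-18 is only a stand-in backbone for illustration."""
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, n_classes)  # new trainable head
    return model

# Stage 1: fine-tune on the public labeled expression data (e.g., the shoulder-pain set).
# Stage 2: unfreeze and refine the same model on the infant videos.
model = build_finetune_model()
```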
Source
http://dx.doi.org/10.1088/1361-6579/ab55b3
December 2019

Deep Unbiased Embedding Transfer for Zero-shot Learning.

IEEE Trans Image Process 2019 Oct 22. Epub 2019 Oct 22.

Zero-shot learning aims to recognize objects which do not appear in the training dataset. Previous prevalent mapping-based zero-shot learning methods suffer from the projection domain shift problem due to the lack of image classes in the training stage. In order to alleviate the projection domain shift problem, a deep unbiased embedding transfer (DUET) model is proposed in this paper. The DUET model is composed of a deep embedding transfer (DET) module and an unseen visual feature generation (UVG) module. In the DET module, a novel combined embedding transfer net which integrates the complementary merits of the linear and nonlinear embedding mapping functions is proposed to connect the visual space and semantic space. Moreover, an end-to-end joint training process is implemented to train the visual feature extractor and the combined embedding transfer net simultaneously. In the UVG module, a visual feature generator trained with a conditional generative adversarial framework is used to synthesize the visual features of the unseen classes to ease the disturbance of the projection domain shift problem. Furthermore, a quantitative index, namely the score of resistance on domain shift (ScoreRDS), is proposed to evaluate different models regarding their resistance to the projection domain shift problem. The experiments on five zero-shot learning benchmarks verify the effectiveness of the proposed DUET model. As demonstrated by the qualitative and quantitative analysis, the unseen class visual feature generation, the combined embedding transfer net and the end-to-end joint training process all contribute to alleviating projection domain shift in zero-shot learning.
Source
http://dx.doi.org/10.1109/TIP.2019.2947780
October 2019

Toward assessment of resection margins using hyperspectral diffuse reflection imaging (400-1,700 nm) during tongue cancer surgery.

Lasers Surg Med 2020 07 15;52(6):496-502. Epub 2019 Sep 15.

Department of Surgery, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands.

Background And Objectives: There is a clinical need to assess the resection margins of tongue cancer specimens, intraoperatively. In the current ex vivo study, we evaluated the feasibility of hyperspectral diffuse reflectance imaging (HSI) for distinguishing tumor from the healthy tongue tissue.

Study Design/materials And Methods: Fresh surgical specimens (n = 14) of squamous cell carcinoma of the tongue were scanned with two hyperspectral cameras that cover the visible and near-infrared spectrum (400-1,700 nm). Each pixel of the hyperspectral image represents a measure of the diffuse optical reflectance. A neural network was used for tissue-type prediction on the hyperspectral images of the visible and near-infrared data sets separately, as well as on both data sets combined.

Results: HSI was able to distinguish tumor from muscle with good accuracy. The diagnostic performance of the two wavelength ranges appears comparable (sensitivity/specificity of visible and near-infrared were 84%/80% and 77%/77%, respectively), and there is no additional benefit from combining the two wavelength ranges (sensitivity/specificity were 83%/76%).

Conclusions: HSI has a strong potential for intra-operative assessment of tumor resection margins of squamous cell carcinoma of the tongue. This may optimize surgery, as the entire resection surface can be scanned in a single run and the results can be readily available. Lasers Surg. Med. © 2019 Wiley Periodicals, Inc.
Source
http://dx.doi.org/10.1002/lsm.23161
July 2020

Deep Salient Object Detection with Contextual Information Guidance.

IEEE Trans Image Process 2019 Jul 30. Epub 2019 Jul 30.

Integration of multi-level contextual information, such as feature maps and side outputs, is crucial for Convolutional Neural Network (CNN) based salient object detection. However, most existing methods either simply concatenate multi-level feature maps or calculate the element-wise addition of multi-level side outputs, thus failing to take full advantage of them. In this work, we propose a new strategy for guiding multi-level contextual information integration, in which feature maps and side outputs across layers are fully engaged. Specifically, shallower-level feature maps are guided by the deeper-level side outputs to learn more accurate properties of the salient object. In turn, the deeper-level side outputs can be propagated to high-resolution versions with spatial details complemented by means of the shallower-level feature maps. Moreover, a group convolution module is proposed with the aim of achieving highly discriminative feature maps, in which the backbone feature maps are divided into a number of groups and the convolution is then applied to the channels within each group. Eventually, the group convolution module is incorporated in the guidance module to further strengthen the guidance role. Experiments on three public benchmark datasets verify the effectiveness and superiority of the proposed method over the state-of-the-art methods.
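The group convolution module mentioned above relies on the standard grouped convolution operation; a small illustration with assumed channel and group counts:

```python
import torch
import torch.nn as nn

# Group convolution: the 64 backbone channels are split into 8 groups of 8,
# and each group is convolved independently (8x fewer weights than a dense conv).
group_conv = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3,
                       padding=1, groups=8)

x = torch.randn(1, 64, 56, 56)       # backbone feature maps
y = group_conv(x)                    # same spatial size, group-wise mixing only
print(y.shape, sum(p.numel() for p in group_conv.parameters()))
```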
Source
http://dx.doi.org/10.1109/TIP.2019.2930906
July 2019

Catheter localization in 3D ultrasound using voxel-of-interest-based ConvNets for cardiac intervention.

Int J Comput Assist Radiol Surg 2019 Jun 9;14(6):1069-1077. Epub 2019 Apr 9.

Eindhoven University of Technology, Eindhoven, The Netherlands.

Purpose: Efficient image-based catheter localization in 3D US during cardiac interventions is highly desired, since it facilitates the operation procedure, reduces the patient risk and improves the outcome. Current image-based catheter localization methods are not efficient or accurate enough for real clinical use.

Methods: We propose a catheter localization method for 3D cardiac ultrasound (US). The catheter candidate voxels are first pre-selected by the Frangi vesselness filter with adaptive thresholding, after which a triplanar-based ConvNet is applied to classify the remaining voxels as catheter or not. We propose a Share-ConvNet for 3D US, which reduces the computation complexity by sharing a single ConvNet for all orthogonal slices. To boost the performance of ConvNet, we also employ two-stage training with weighted cross-entropy. Using the classified voxels, the catheter is localized by a model fitting algorithm.

Results: To validate our method, we have collected challenging ex vivo datasets. Extensive experiments show that the proposed method outperforms state-of-the-art methods and can localize the catheter with an average error of 2.1 mm in around 10 s per volume.

Conclusion: Our method can automatically localize the cardiac catheter in challenging 3D cardiac US images. The localization efficiency and accuracy of the proposed method are considered promising for catheter detection and localization during clinical interventions.
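A small sketch of the pre-selection step from the Methods (Frangi vesselness filtering followed by thresholding); the percentile-based threshold is an assumed stand-in for the paper's adaptive thresholding:

```python
import numpy as np
from skimage.filters import frangi

def preselect_catheter_voxels(volume, percentile=99.5):
    """Pre-select candidate catheter voxels with a Frangi vesselness filter.

    volume: 3D ultrasound volume (float array). The adaptive threshold is
    approximated here by a high percentile of the vesselness response.
    """
    vesselness = frangi(volume, black_ridges=False)   # respond to bright tubular structures
    thresh = np.percentile(vesselness, percentile)
    return vesselness > thresh                        # boolean candidate mask

candidates = preselect_catheter_voxels(np.random.rand(40, 64, 64).astype(np.float32))
```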
Source
http://dx.doi.org/10.1007/s11548-019-01960-y
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6544608
June 2019

Catheter segmentation in three-dimensional ultrasound images by feature fusion and model fitting.

J Med Imaging (Bellingham) 2019 Jan 14;6(1):015001. Epub 2019 Jan 14.

Eindhoven University of Technology, VCA Research Group, Eindhoven, The Netherlands.

Ultrasound (US) has been increasingly used during interventions, such as cardiac catheterization. To accurately identify the catheter inside US images, extra training for physicians and sonographers is needed. Consequently, automated segmentation of the catheter in US images and optimized presentation to the physician can help improve the efficiency and safety of interventions and their outcome. For cardiac catheterization, three-dimensional (3-D) US is potentially attractive because it is a radiation-free modality and offers richer spatial information. However, due to the limited spatial resolution of 3-D cardiac US and the complex anatomical structures inside the heart, image-based catheter segmentation is challenging. We propose a cardiac catheter segmentation method for 3-D US data based on image processing techniques. Our method first applies a voxel-based classification through newly designed multiscale and multidefinition features, which provide a robust catheter voxel segmentation in 3-D US. Second, a modified catheter model fitting is applied to segment the curved catheter in 3-D US images. The proposed method is validated with extensive experiments using different datasets. The proposed method can segment the catheter with an average tip-point error smaller than the catheter diameter (1.9 mm) in the volumetric images. Based on automated catheter segmentation combined with optimal viewing, physicians do not have to interpret US images and can focus on the procedure itself, improving the quality of cardiac intervention.
Source
http://dx.doi.org/10.1117/1.JMI.6.1.015001
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6330658
January 2019

Hybrid Optical Unobtrusive Blood Pressure Measurements.

Sensors (Basel) 2017 Jul 1;17(7). Epub 2017 Jul 1.

Philips Research Eindhoven, 5656 AE Eindhoven, The Netherlands.

Blood pressure (BP) is critical in diagnosing certain cardiovascular diseases such as hypertension. Previous studies have shown that BP can be estimated from the pulse transit time (PTT) calculated between a pair of photoplethysmography (PPG) signals acquired at two body sites. Currently, contact PPG (cPPG) and imaging PPG (iPPG) are two feasible ways to obtain PPG signals. In this study, we propose a hybrid system (called the ICPPG system) employing both methods that can be implemented on a wearable device, facilitating the measurement of BP in an inconspicuous way. The feasibility of the ICPPG system was validated on a dataset with 29 subjects. The results show that the ICPPG system is able to estimate PTT values. Moreover, the PTT measured by the new system shows, on average, a correlation with BP variations for most subjects, which could facilitate a new generation of BP measurement using wearable and mobile devices.
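A minimal sketch of estimating PTT from two simultaneously recorded PPG signals via cross-correlation; the lag-based estimator here is a generic illustration rather than the ICPPG system's algorithm:

```python
import numpy as np

def pulse_transit_time(ppg_proximal, ppg_distal, fs):
    """Estimate PTT as the lag maximizing the cross-correlation of two PPG signals.

    ppg_proximal/ppg_distal: simultaneously recorded PPG traces from two body
    sites (e.g., contact and camera-based), already detrended. fs: sampling rate (Hz).
    """
    a = ppg_proximal - np.mean(ppg_proximal)
    b = ppg_distal - np.mean(ppg_distal)
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)     # samples by which the distal site lags
    return lag / fs                           # seconds

# Synthetic example: distal pulse delayed by 40 ms at fs = 250 Hz.
fs = 250.0
t = np.arange(0, 5, 1 / fs)
prox = np.sin(2 * np.pi * 1.2 * t)
dist = np.sin(2 * np.pi * 1.2 * (t - 0.04))
print(pulse_transit_time(prox, dist, fs))    # ≈ 0.04
```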
Source
http://dx.doi.org/10.3390/s17071541
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5539707
July 2017

Ultrasound coefficient of nonlinearity imaging.

IEEE Trans Ultrason Ferroelectr Freq Control 2015 Jul;62(7):1331-41

Imaging the acoustical coefficient of nonlinearity, β, is of interest in several healthcare interventional applications, as it is an important feature that can be used for discriminating tissues. In this paper, we propose a nonlinearity characterization method with the goal of locally estimating the coefficient of nonlinearity. The proposed method is based on a 1-D solution of the nonlinear lossy Westervelt equation, thereby deriving a local relation between β and the pressure wave field. Based on several assumptions, a β imaging method is then presented that uses the ratio between the harmonic and fundamental fields, thereby reducing the effect of spatial amplitude variations of the speckle pattern. By testing the method on simulated ultrasound pressure fields and an in vitro B-mode ultrasound acquisition, we show that the designed algorithm is able to estimate the coefficient of nonlinearity and that the tissue types of interest are well discriminable. The proposed imaging method provides a new approach to β estimation that does not require a special measurement setup or transducer, and it appears particularly promising for in vivo imaging.
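The ratio-based idea above (harmonic field over fundamental field) can be illustrated along a single received RF line; the band-pass design, bandwidths and signal parameters below are assumptions for illustration only:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def harmonic_to_fundamental_ratio(rf_line, fs, f0, rel_bw=0.3):
    """Envelope ratio of the second-harmonic band to the fundamental band
    along one RF line; a crude ingredient of ratio-based beta imaging.

    rf_line: 1D RF signal, fs: sampling rate (Hz), f0: transmit frequency (Hz).
    """
    def band_envelope(signal, fc):
        low, high = fc * (1 - rel_bw / 2), fc * (1 + rel_bw / 2)
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        return np.abs(hilbert(filtfilt(b, a, signal)))

    fund = band_envelope(rf_line, f0)
    harm = band_envelope(rf_line, 2 * f0)
    return harm / (fund + 1e-12)              # per-sample ratio along depth

# Example with a synthetic line containing a weak second harmonic.
fs, f0 = 40e6, 3e6
t = np.arange(0, 20e-6, 1 / fs)
line = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 2 * f0 * t)
ratio = harmonic_to_fundamental_ratio(line, fs, f0)
```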
Source
http://dx.doi.org/10.1109/TUFFC.2015.007009
July 2015

Smile detection by boosting pixel differences.

Authors:
Caifeng Shan

IEEE Trans Image Process 2012 Jan 14;21(1):431-6. Epub 2011 Jul 14.

Smile detection in face images captured in unconstrained real-world scenarios is an interesting problem with many potential applications. This paper presents an efficient approach to smile detection, in which the intensity differences between pixels in the grayscale face images are used as features. We adopt AdaBoost to choose and combine weak classifiers based on intensity differences to form a strong classifier. Experiments show that our approach has similar accuracy to the state-of-the-art method but is significantly faster. Our approach provides 85% accuracy by examining 20 pairs of pixels and 88% accuracy with 100 pairs of pixels. We match the accuracy of the Gabor-feature-based support vector machine using as few as 350 pairs of pixels.
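A rough sketch of the feature/classifier combination described above: intensity differences of selected pixel pairs fed to AdaBoost with decision stumps (the random pairs and dummy labels here are purely for illustration):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def pixel_difference_features(gray_faces, pairs):
    """Intensity differences between chosen pixel pairs of aligned grayscale faces.

    gray_faces: (N, H, W) array of aligned face crops.
    pairs:      (P, 2, 2) array of (row, col) coordinates for each pixel pair.
    """
    feats = np.empty((gray_faces.shape[0], pairs.shape[0]), dtype=np.float32)
    for i, ((r1, c1), (r2, c2)) in enumerate(pairs):
        feats[:, i] = gray_faces[:, r1, c1].astype(np.float32) - gray_faces[:, r2, c2]
    return feats

# Illustrative setup: 200 random pixel pairs on 24x24 crops, boosted decision stumps.
rng = np.random.default_rng(0)
pairs = rng.integers(0, 24, size=(200, 2, 2))
faces = rng.integers(0, 256, size=(100, 24, 24))
labels = rng.integers(0, 2, size=100)          # smile / non-smile (dummy labels here)
X = pixel_difference_features(faces, pairs)
clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)  # default weak learner: depth-1 stumps
```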
Source
http://dx.doi.org/10.1109/TIP.2011.2161587
January 2012