Publications by authors named "Ghassan Hamarneh"

98 Publications

Learnable image histograms-based deep radiomics for renal cell carcinoma grading and staging.

Comput Med Imaging Graph 2021 Apr 21;90:101924. Epub 2021 Apr 21.

BiSICL, University of British Columbia, Vancouver, BC V6T 1Z4, Canada.

Fuhrman cancer grading and tumor-node-metastasis (TNM) cancer staging systems are typically used by clinicians in the treatment planning of renal cell carcinoma (RCC), a common cancer in men and women worldwide. Pathologists typically use percutaneous renal biopsy for RCC grading, while staging is performed by volumetric medical image analysis before renal surgery. Recent studies suggest that clinicians can effectively perform these classification tasks non-invasively by analyzing image texture features of RCC from computed tomography (CT) data. However, image feature identification for RCC grading and staging often relies on laborious manual processes, which are error-prone and time-intensive. To address this challenge, this paper proposes a learnable image histogram in the deep neural network framework that can learn task-specific image histograms with variable bin centers and widths. The proposed approach enables learning statistical context features from raw medical data, which cannot be performed by a conventional convolutional neural network (CNN). The linear basis function of our learnable image histogram is piecewise differentiable, enabling errors to be back-propagated to update the variable bin centers and widths during training. This novel approach can segregate the CT textures of an RCC into different intensity spectra, which enables efficient Fuhrman low (I/II) versus high (III/IV) grading as well as RCC low (I/II) versus high (III/IV) staging. The proposed method is validated on a clinical CT dataset of 159 patients from The Cancer Imaging Archive (TCIA) database, and it demonstrates 80% and 83% accuracy in RCC grading and staging, respectively.
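To make the idea above concrete, here is a minimal sketch (not the authors' code) of a learnable histogram layer in PyTorch: each bin has a trainable center and width, and intensities cast soft, piecewise-linear votes into the bins, so the bin parameters receive gradients during training. The layer and classifier names, bin count, and initialization are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a learnable histogram layer in PyTorch.
# Each bin b has a trainable center c_b and width w_b; an intensity x casts a
# "soft vote" max(0, 1 - |x - c_b| / w_b) into bin b, which is piecewise linear and
# differentiable almost everywhere, so c_b and w_b can be updated by backprop.
import torch
import torch.nn as nn

class LearnableHistogram(nn.Module):
    def __init__(self, num_bins=16, init_range=(0.0, 1.0)):
        super().__init__()
        lo, hi = init_range
        self.centers = nn.Parameter(torch.linspace(lo, hi, num_bins))
        self.widths = nn.Parameter(torch.full((num_bins,), (hi - lo) / num_bins))

    def forward(self, x):
        # x: (batch, num_voxels) flattened intensities of a patch or ROI
        x = x.unsqueeze(-1)                                   # (B, N, 1)
        votes = 1.0 - (x - self.centers).abs() / self.widths.abs().clamp(min=1e-6)
        votes = votes.clamp(min=0.0)                          # triangular basis
        return votes.mean(dim=1)                              # (B, num_bins) normalized histogram

# toy usage: histogram features feeding a small grade classifier
feats = LearnableHistogram(num_bins=16)(torch.rand(4, 4096))
logits = nn.Linear(16, 2)(feats)                              # low vs. high grade
```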
http://dx.doi.org/10.1016/j.compmedimag.2021.101924
April 2021

Predicting the clinical management of skin lesions using deep learning.

Sci Rep 2021 Apr 8;11(1):7769. Epub 2021 Apr 8.

School of Computing Science, Simon Fraser University, Burnaby, BC V5A 1S6, Canada.

Automated machine learning approaches to skin lesion diagnosis from images are approaching dermatologist-level performance. However, current machine learning approaches that suggest management decisions rely on predicting the underlying skin condition to infer a management decision without considering the variability of management decisions that may exist within a single condition. We present the first work to explore image-based prediction of clinical management decisions directly without explicitly predicting the diagnosis. In particular, we use clinical and dermoscopic images of skin lesions along with patient metadata from the Interactive Atlas of Dermoscopy dataset (1011 cases; 20 disease labels; 3 management decisions) and demonstrate that predicting management labels directly is more accurate than predicting the diagnosis and then inferring the management decision ([Formula: see text] and [Formula: see text] improvement in overall accuracy and AUROC respectively), statistically significant at [Formula: see text]. Directly predicting management decisions also considerably reduces the over-excision rate as compared to management decisions inferred from diagnosis predictions (24.56% fewer cases wrongly predicted to be excised). Furthermore, we show that training a model to also simultaneously predict the seven-point criteria and the diagnosis of skin lesions yields an even higher accuracy (improvements of [Formula: see text] and [Formula: see text] in overall accuracy and AUROC respectively) of management predictions. Finally, we demonstrate our model's generalizability by evaluating on the publicly available MClass-D dataset and show that our model agrees with the clinical management recommendations of 157 dermatologists as much as they agree amongst each other.
http://dx.doi.org/10.1038/s41598-021-87064-7
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8032721
April 2021

Single molecule network analysis identifies structural changes to caveolae and scaffolds due to mutation of the caveolin-1 scaffolding domain.

Sci Rep 2021 Apr 8;11(1):7810. Epub 2021 Apr 8.

Life Sciences Institute, Department of Cellular and Physiological Sciences, University of British Columbia, Vancouver, BC, V6T 1Z3, Canada.

Caveolin-1 (CAV1), the caveolae coat protein, also associates with non-caveolar scaffold domains. Single molecule localization microscopy (SMLM) network analysis distinguishes caveolae and three scaffold domains: hemispherical S2 scaffolds and smaller S1B and S1A scaffolds. The caveolin scaffolding domain (CSD) is a highly conserved hydrophobic region that mediates interaction of CAV1 with multiple effector molecules. The F92A/V94A mutation disrupts CSD function; however, the structural impact of CSD mutation on caveolae or scaffolds remains unknown. Here, SMLM network analysis quantitatively shows that expression of the CAV1 CSD F92A/V94A mutant in CRISPR/Cas CAV1 knockout MDA-MB-231 breast cancer cells reduces the size and volume and enhances the elongation of caveolae and scaffold domains, with more pronounced effects on S2 and S1B scaffolds. Convex hull analysis of the outer surface of the CAV1 point clouds confirms the size reduction of CSD mutant CAV1 blobs and shows that CSD mutation reduces volume variation amongst S2 and S1B CAV1 blobs at increasing shrink values, which may reflect retraction of the CAV1 N-terminus towards the membrane, potentially preventing accessibility of the CSD. Detection of point mutation-induced changes to CAV1 domains highlights the utility of SMLM network analysis for mesoscale structural analysis of oligomers in their native environment.
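As a rough illustration of the convex hull analysis mentioned above, the sketch below computes the hull volume of a 3D localization cluster as its points are contracted toward their centroid by a set of shrink factors. The reading of "shrink values" as a contraction factor, and all names and numbers, are assumptions rather than the authors' implementation.

```python
# Minimal sketch (assumed workflow, not the authors' pipeline): convex-hull volume of a
# 3D SMLM cluster ("blob") as its points are contracted toward the centroid by a shrink
# factor, to probe how volume varies across shrink values.
import numpy as np
from scipy.spatial import ConvexHull

def hull_volumes(points, shrink_values=(0.0, 0.1, 0.2, 0.3)):
    """points: (N, 3) localization coordinates of one CAV1 blob (in nm)."""
    centroid = points.mean(axis=0)
    volumes = {}
    for s in shrink_values:
        shrunk = centroid + (1.0 - s) * (points - centroid)  # move points toward centroid
        volumes[s] = ConvexHull(shrunk).volume
    return volumes

rng = np.random.default_rng(0)
blob = rng.normal(scale=40.0, size=(200, 3))   # synthetic blob, ~40 nm spread
print(hull_volumes(blob))
```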
http://dx.doi.org/10.1038/s41598-021-86770-6
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8032680
April 2021

Cascaded Localization Regression Neural Nets for Kidney Localization and Segmentation-free Volume Estimation.

IEEE Trans Med Imaging 2021 Feb 19;PP. Epub 2021 Feb 19.

Kidney volume is an essential biomarker for a number of kidney disease diagnoses, for example, chronic kidney disease. Existing total kidney volume estimation methods often rely on an intermediate kidney segmentation step. On the other hand, automatic kidney localization in volumetric medical images is a critical step that often precedes subsequent data processing and analysis. Most current approaches perform kidney localization via an intermediate classification or regression step. This paper proposes an integrated deep learning approach for (i) kidney localization in computed tomography scans and (ii) segmentation-free renal volume estimation. Our localization method uses a selection-convolutional neural network that approximates the kidney inferior-superior span along the axial direction. Cross-sectional (2D) slices from the estimated span are subsequently used in a combined sagittal-axial Mask-RCNN that detects the organ bounding boxes on the axial and sagittal slices, the combination of which produces a final 3D organ bounding box. Furthermore, we use a fully convolutional network to estimate the kidney volume that skips the segmentation procedure. We also present a mathematical expression to approximate the 'volume error' metric from the 'Sørensen-Dice coefficient.' We accessed 100 patients' CT scans from the Vancouver General Hospital records and obtained 210 patients' CT scans from the 2019 Kidney Tumor Segmentation Challenge database to validate our method. Our method produces a kidney boundary wall localization error of ~2.4mm and a mean volume estimation error of ~5%.
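The paper derives a closed-form approximation of the volume error metric from the Sørensen-Dice coefficient; that expression is not reproduced here, but the following sketch simply defines both quantities on binary masks so the relationship being approximated is concrete. Voxel size and mask shapes are illustrative.

```python
# Minimal sketch: computing the Sørensen-Dice coefficient and the (signed) relative
# volume error for a predicted vs. reference kidney mask. The paper gives an analytical
# approximation linking the two; this only defines the raw quantities.
import numpy as np

def dice(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def relative_volume_error(pred, ref, voxel_volume_mm3=1.0):
    v_pred = pred.sum() * voxel_volume_mm3
    v_ref = ref.sum() * voxel_volume_mm3
    return (v_pred - v_ref) / v_ref

ref = np.zeros((64, 64, 64), dtype=bool); ref[20:44, 20:44, 20:44] = True
pred = np.zeros_like(ref);               pred[22:46, 20:44, 20:44] = True
print(f"Dice={dice(pred, ref):.3f}, RVE={relative_volume_error(pred, ref):+.2%}")
```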
http://dx.doi.org/10.1109/TMI.2021.3060465
February 2021

Super resolution microscopy and deep learning identify Zika virus reorganization of the endoplasmic reticulum.

Sci Rep 2020 12 1;10(1):20937. Epub 2020 Dec 1.

Life Sciences Institute, University of British Columbia, Vancouver, BC, V6T 1Z3, Canada.

The endoplasmic reticulum (ER) is a complex subcellular organelle composed of diverse structures such as tubules, sheets and tubular matrices. Flaviviruses such as Zika virus (ZIKV) induce reorganization of ER membranes to facilitate viral replication. Here, using 3D super resolution microscopy, ZIKV infection is shown to induce the formation of dense tubular matrices associated with viral replication in the central ER. Viral non-structural proteins NS4B and NS2B associate with replication complexes within the ZIKV-induced tubular matrix and exhibit distinct ER distributions outside this central ER region. Deep neural networks trained to distinguish ZIKV-infected versus mock-infected cells successfully identified ZIKV-induced central ER tubular matrices as a determinant of viral infection. Super resolution microscopy and deep learning are therefore able to identify and localize morphological features of the ER and allow for better understanding of how ER morphology changes due to viral infection.
http://dx.doi.org/10.1038/s41598-020-77170-3
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7708840
December 2020

A Review of Super-Resolution Single-Molecule Localization Microscopy Cluster Analysis and Quantification Methods.

Patterns (N Y) 2020 Jun 12;1(3):100038. Epub 2020 Jun 12.

Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby, BC V5A 1S6, Canada.

Single-molecule localization microscopy (SMLM) is a relatively new imaging modality, recognized by the 2014 Nobel Prize in Chemistry, and considered one of the key super-resolution techniques. SMLM goes beyond the diffraction limit of light microscopy and achieves resolution on the order of 10-20 nm. SMLM thus enables imaging of single molecules and the study of low-level molecular interactions at the subcellular level. In contrast to standard microscopy imaging that produces 2D pixel or 3D voxel grid data, SMLM generates big data of 2D or 3D point clouds with millions of localizations and associated uncertainties. This unprecedented breakthrough in imaging helps researchers employ SMLM in many fields within biology and medicine, such as studying cancerous cells and cell-mediated immunity and accelerating drug discovery. However, SMLM data quantification and interpretation methods have yet to keep pace with the rapid advancement of SMLM imaging. Researchers have been actively exploring new computational methods for SMLM data analysis to extract biosignatures of various biological structures and functions. In this survey, we describe the state-of-the-art clustering methods adopted to analyze and quantify SMLM data and examine the capabilities and shortcomings of the surveyed methods. We classify the methods according to (1) the biological application (i.e., the imaged molecules/structures), (2) the data acquisition (such as imaging modality, dimension, resolution, and number of localizations), and (3) the analysis details (2D versus 3D, field of view versus region of interest, use of machine-learning and multi-scale analysis, biosignature extraction, etc.). We observe that the majority of methods that are based on second-order statistics are sensitive to noise and imaging artifacts, have not been applied to 3D data, do not leverage machine-learning formulations, and are not scalable for big-data analysis. Finally, we summarize state-of-the-art methodology, discuss some key open challenges, and identify future opportunities for better modeling and design of an integrated computational pipeline to address the key challenges.
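As a point of reference for the "second-order statistics" that the review contrasts with machine-learning formulations, the following is a rough sketch of one such statistic, Ripley's K function, for a 2D point pattern, without edge correction. It is an illustration of the concept, not the implementation of any surveyed method.

```python
# Rough sketch of a second-order statistic (Ripley's K) for a 2D SMLM point pattern,
# without edge correction; K(r) above pi*r^2 indicates clustering at scale r.
import numpy as np
from scipy.spatial.distance import pdist

def ripley_k(points, radii, area):
    n = len(points)
    d = pdist(points)                       # pairwise distances (each pair once)
    lam = n / area                          # intensity (points per unit area)
    return np.array([2.0 * np.sum(d <= r) / (lam * n) for r in radii])

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1000, size=(500, 2))   # 1000 x 1000 nm field of view, random pattern
radii = np.array([25, 50, 100, 200])
print(ripley_k(pts, radii, area=1000.0 * 1000.0))
print(np.pi * radii**2)                     # reference values under complete spatial randomness
```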
http://dx.doi.org/10.1016/j.patter.2020.100038
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7660399
June 2020

A fast and fully-automated deep-learning approach for accurate hemorrhage segmentation and volume quantification in non-contrast whole-head CT.

Sci Rep 2020 11 9;10(1):19389. Epub 2020 Nov 9.

Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada.

This project aimed to develop and evaluate a fast and fully-automated deep-learning method applying convolutional neural networks with deep supervision (CNN-DS) for accurate hematoma segmentation and volume quantification in computed tomography (CT) scans. Non-contrast whole-head CT scans of 55 patients with hemorrhagic stroke were used. Individual scans were standardized to 64 axial slices of 128 × 128 voxels. Each voxel was annotated independently by experienced raters, generating a binary label of hematoma versus normal brain tissue based on majority voting. The dataset was split randomly into training (n = 45) and testing (n = 10) subsets. A CNN-DS model was built applying the training data and examined using the testing data. Performance of the CNN-DS solution was compared with three previously established methods. The CNN-DS achieved a Dice coefficient score of 0.84 ± 0.06 and recall of 0.83 ± 0.07, higher than patch-wise U-Net (< 0.76). CNN-DS average running time of 0.74 ± 0.07 s was faster than PItcHPERFeCT (> 1412 s) and slice-based U-Net (> 12 s). Comparable interrater agreement rates were observed between "method-human" vs. "human-human" (Cohen's kappa coefficients > 0.82). The fully automated CNN-DS approach demonstrated expert-level accuracy in fast segmentation and quantification of hematoma, substantially improving over previous methods. Further research is warranted to test the CNN-DS solution as a software tool in clinical settings for effective stroke management.
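Deep supervision, as used in the CNN-DS model, attaches auxiliary losses to intermediate decoder outputs. The sketch below shows one common way to do this for binary segmentation; the weights, resolutions, and loss choice are assumptions, not the paper's exact configuration.

```python
# Minimal sketch (not the authors' exact model): deep supervision for a segmentation CNN.
# Auxiliary predictions from intermediate decoder depths are compared against downsampled
# labels, and their losses are added (with decaying weights) to the full-resolution loss.
import torch
import torch.nn.functional as F

def deep_supervision_loss(main_logits, aux_logits_list, labels, aux_weights=(0.5, 0.25)):
    """main_logits: (B, 1, H, W); aux_logits_list: lower-resolution logits; labels: (B, 1, H, W) in {0,1}."""
    loss = F.binary_cross_entropy_with_logits(main_logits, labels)
    for logits, w in zip(aux_logits_list, aux_weights):
        small_labels = F.interpolate(labels, size=logits.shape[-2:], mode="nearest")
        loss = loss + w * F.binary_cross_entropy_with_logits(logits, small_labels)
    return loss

labels = (torch.rand(2, 1, 128, 128) > 0.8).float()
main = torch.randn(2, 1, 128, 128, requires_grad=True)
aux = [torch.randn(2, 1, 64, 64, requires_grad=True), torch.randn(2, 1, 32, 32, requires_grad=True)]
deep_supervision_loss(main, aux, labels).backward()
```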
http://dx.doi.org/10.1038/s41598-020-76459-7
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7652921
November 2020

ADFAC: Automatic detection of facial articulatory features.

MethodsX 2020 22;7:101006. Epub 2020 Jul 22.

Language and Brain Lab, Department of Linguistics, Simon Fraser University, Canada.

Using computer-vision and image processing techniques, we aim to identify specific visual cues as induced by facial movements made during monosyllabic speech production. The method is named ADFAC: Automatic Detection of Facial Articulatory Cues. Four facial points of interest were detected automatically to represent head, eyebrow and lip movements: nose tip (proxy for head movement), medial point of left eyebrow, and midpoints of the upper and lower lips. The detected points were then automatically tracked in the subsequent video frames. Critical features such as the distance, velocity, and acceleration describing local facial movements with respect to the resting face of each speaker were extracted from the positional profiles of each tracked point. In this work, a variant of random forest is proposed to determine which facial features are significant in classifying speech sound categories. The method takes in both video and audio as input and extracts features from any video with a plain or simple background. The method is implemented in MATLAB and scripts are made available on GitHub for easy access.
• Using innovative computer-vision and image processing techniques to automatically detect and track keypoints on the face during speech production in videos, thus allowing more natural articulation than previous sensor-based approaches.
• Measuring multi-dimensional and dynamic facial movements by extracting time-related, distance-related and kinematics-related features in speech production.
• Adopting the novel random forest classification approach to determine and rank the significance of facial features toward accurate speech sound categorization.
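The published ADFAC scripts are in MATLAB; the following Python sketch only illustrates the kind of kinematic features described above (distance, velocity, and acceleration of a tracked point relative to the resting face) and how such features could feed a random forest. All names, summary statistics, and data are hypothetical.

```python
# Minimal sketch (the published ADFAC code is MATLAB; this is an assumed Python analogue):
# distance/velocity/acceleration features from one tracked facial point, summarized and
# fed to a random forest for speech-sound classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def kinematic_features(xy, rest_xy, fps=30.0):
    """xy: (T, 2) tracked point positions; rest_xy: (2,) resting-face position."""
    dist = np.linalg.norm(xy - rest_xy, axis=1)       # displacement from resting face
    vel = np.gradient(dist) * fps                     # per-second velocity
    acc = np.gradient(vel) * fps                      # per-second acceleration
    return np.array([dist.max(), dist.mean(), np.abs(vel).max(), np.abs(acc).max()])

rng = np.random.default_rng(2)
X = np.stack([kinematic_features(rng.normal(size=(60, 2)).cumsum(0), np.zeros(2)) for _ in range(40)])
y = rng.integers(0, 2, size=40)                       # toy sound-category labels
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```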
http://dx.doi.org/10.1016/j.mex.2020.101006
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7393529
July 2020

Multisite Technical and Clinical Performance Evaluation of Quantitative Imaging Biomarkers from 3D FDG PET Segmentations of Head and Neck Cancer Images.

Tomography 2020 06;6(2):65-76

Electrical and Computer Engineering.

Quantitative imaging biomarkers (QIBs) provide medical image-derived intensity, texture, shape, and size features that may help characterize cancerous tumors and predict clinical outcomes. Successful clinical translation of QIBs depends on the robustness of their measurements. Biomarkers derived from positron emission tomography images are prone to measurement errors owing to differences in image processing factors such as the tumor segmentation method used to define volumes of interest over which to calculate QIBs. We illustrate a new Bayesian statistical approach to characterize the robustness of QIBs to different processing factors. Study data consist of 22 QIBs measured on 47 head and neck tumors in 10 positron emission tomography/computed tomography scans segmented manually and with semiautomated methods used by 7 institutional members of the NCI Quantitative Imaging Network. QIB performance is estimated and compared across institutions with respect to measurement errors and power to recover statistical associations with clinical outcomes. Analysis findings summarize the performance impact of different segmentation methods used by Quantitative Imaging Network members. Robustness of some advanced biomarkers was found to be similar to conventional markers, such as maximum standardized uptake value. Such similarities support current pursuits to better characterize disease and predict outcomes by developing QIBs that use more imaging information and are robust to different processing factors. Nevertheless, to ensure reproducibility of QIB measurements and measures of association with clinical outcomes, errors owing to segmentation methods need to be reduced.
http://dx.doi.org/10.18383/j.tom.2020.00004
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7289247
June 2020

Artificial intelligence in glioma imaging: challenges and advances.

J Neural Eng 2020 Apr 30;17(2):021002. Epub 2020 Apr 30.

School of Computing Science, Simon Fraser University, Burnaby, Canada.

Primary brain tumors including gliomas continue to pose significant management challenges to clinicians. While the presentation, the pathology, and the clinical course of these lesions are variable, the initial investigations are usually similar. Patients who are suspected to have a brain tumor will be assessed with computed tomography (CT) and magnetic resonance imaging (MRI). The imaging findings are used by neurosurgeons to determine the feasibility of surgical resection and plan such an undertaking. Imaging studies are also an indispensable tool in tracking tumor progression or its response to treatment. As these imaging studies are non-invasive, relatively cheap and accessible to patients, there have been many efforts over the past two decades to increase the amount of clinically-relevant information that can be extracted from brain imaging. Most recently, artificial intelligence (AI) techniques have been employed to segment and characterize brain tumors, as well as to detect progression or treatment-response. However, the clinical utility of such endeavours remains limited due to challenges in data collection and annotation, model training, and the reliability of AI-generated information. We provide a review of recent advances in addressing the above challenges. First, to overcome the challenge of data paucity, different image imputation and synthesis techniques along with annotation collection efforts are summarized. Next, various training strategies are presented to meet multiple desiderata, such as model performance, generalization ability, data privacy protection, and learning with sparse annotations. Finally, standardized performance evaluation and model interpretability methods have been reviewed. We believe that these technical approaches will facilitate the development of a fully-functional AI tool in the clinical care of patients with gliomas.
http://dx.doi.org/10.1088/1741-2552/ab8131
April 2020

ERGO: Efficient Recurrent Graph Optimized Emitter Density Estimation in Single Molecule Localization Microscopy.

IEEE Trans Med Imaging 2020 06 25;39(6):1942-1956. Epub 2019 Dec 25.

Single molecule localization microscopy (SMLM) allows unprecedented insight into the three-dimensional organization of proteins at the nanometer scale. The combination of minimally invasive cell imaging with high resolution positions SMLM at the forefront of scientific discovery in cancer, infectious, and degenerative diseases. By stochastic temporal and spatial separation of light emissions from fluorescently labelled proteins, SMLM is capable of nanometer-scale reconstruction of cellular structures. Precise localization of proteins in 3D astigmatic SMLM is dependent on parameter-sensitive preprocessing steps to select regions of interest. With SMLM acquisition highly variable over time, it is non-trivial to find an optimal static parameter configuration. The high emitter density required for reconstruction of complex protein structures can compromise accuracy and introduce artifacts. To address these problems, we introduce two modular auto-tuning pre-processing methods: adaptive signal detection and learned recurrent signal density estimation, which leverage the information stored in the sequence of frames that compose the SMLM acquisition process. We show empirically that our contributions improve accuracy, precision and recall with respect to the state of the art. Both modules auto-tune their hyper-parameters to reduce the parameter space for practitioners, improve robustness and reproducibility, and are validated on a reference in silico dataset. Adaptive signal detection and density prediction can offer a practitioner, in addition to informed localization, a tool to tune acquisition parameters ensuring improved reconstruction of the underlying protein complex. We illustrate the challenges faced by practitioners in applying SMLM algorithms to real-world data markedly different from the data used in development, and show how ERGO can be run on new datasets without retraining, motivating the need for robust transfer learning in SMLM.
http://dx.doi.org/10.1109/TMI.2019.2962361
June 2020

Missing MRI Pulse Sequence Synthesis Using Multi-Modal Generative Adversarial Network.

IEEE Trans Med Imaging 2020 04 4;39(4):1170-1183. Epub 2019 Oct 4.

Magnetic resonance imaging (MRI) is being increasingly utilized to assess, diagnose, and plan treatment for a variety of diseases. The ability to visualize tissue in varied contrasts in the form of MR pulse sequences in a single scan provides valuable insights to physicians, as well as enabling automated systems performing downstream analysis. However, many issues like prohibitive scan time, image corruption, different acquisition protocols, or allergies to certain contrast materials may hinder the process of acquiring multiple sequences for a patient. This poses challenges to both physicians and automated systems since complementary information provided by the missing sequences is lost. In this paper, we propose a variant of generative adversarial network (GAN) capable of leveraging redundant information contained within multiple available sequences in order to generate one or more missing sequences for a patient scan. The proposed network is designed as a multi-input, multi-output network which combines information from all the available pulse sequences and synthesizes the missing ones in a single forward pass. We demonstrate and validate our method on two brain MRI datasets each with four sequences, and show the applicability of the proposed method in simultaneously synthesizing all missing sequences in any possible scenario where either one, two, or three of the four sequences may be missing. We compare our approach with competing unimodal and multi-modal methods, and show that we outperform both quantitatively and qualitatively.
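A minimal sketch of the multi-input, multi-output idea follows: all four sequences enter as channels (missing ones zero-filled, with an availability mask), and all four are produced in one forward pass. The tiny convolutional generator, mask encoding, and layer sizes are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (idea only, not the paper's GAN): a multi-input multi-output generator
# that takes all four MR sequences as channels (missing ones zero-filled, availability
# encoded in a mask) and synthesizes all four in a single forward pass.
import torch
import torch.nn as nn

class MultiSeqGenerator(nn.Module):
    def __init__(self, n_seq=4, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * n_seq, base, 3, padding=1), nn.ReLU(),   # images + availability mask
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, n_seq, 1),                              # one output channel per sequence
        )

    def forward(self, images, available):
        # images: (B, 4, H, W) with missing sequences zeroed; available: (B, 4) in {0,1}
        mask = available[:, :, None, None].expand_as(images)
        return self.net(torch.cat([images * mask, mask], dim=1))

g = MultiSeqGenerator()
imgs = torch.randn(1, 4, 64, 64)
avail = torch.tensor([[1.0, 1.0, 0.0, 1.0]])      # one sequence missing
out = g(imgs, avail)                               # (1, 4, 64, 64): all sequences synthesized
```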
http://dx.doi.org/10.1109/TMI.2019.2945521
April 2020

Caveolae and scaffold detection from single molecule localization microscopy data using deep learning.

PLoS One 2019 26;14(8):e0211659. Epub 2019 Aug 26.

Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby, BC V5A 1S6, Canada.

Caveolae are plasma membrane invaginations whose formation requires caveolin-1 (Cav1) and the adaptor protein polymerase I and transcript release factor (PTRF, also known as CAVIN1). Caveolae have an important role in cell functioning, signaling, and disease. In the absence of CAVIN1/PTRF, Cav1 forms non-caveolar membrane domains called scaffolds. In this work, we train machine learning models to automatically distinguish between caveolae and scaffolds from single molecule localization microscopy (SMLM) data; to our knowledge, this is the first work to leverage machine learning approaches (including deep learning models) to automatically identify biological structures from SMLM data. In particular, we develop and compare three binary classification methods to identify whether or not a given 3D cluster of Cav1 proteins is a caveola. The first uses a random forest classifier applied to 28 hand-crafted/designed features, the second uses a convolutional neural net (CNN) applied to a projection of the point clouds onto three planes, and the third uses a PointNet model, a recent development that can directly take point clouds as its input. We validate our methods on a dataset of super-resolution microscopy images of PC3 prostate cancer cells labeled for Cav1. Specifically, we have images from two cell populations: 10 PC3 and 10 CAVIN1/PTRF-transfected PC3 cells (PC3-PTRF cells) that form caveolae. We obtained a balanced set of 1714 different cellular structures. Our results show that both the random forest on hand-designed features and the deep learning approach achieve high accuracy in distinguishing the intrinsic features of the caveolae and non-caveolae biological structures. More specifically, both the random forest and deep CNN classifiers achieve classification accuracy reaching 94% on our test set, while the PointNet model only reached 83% accuracy. We also discuss the pros and cons of the different approaches.
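The second classifier above operates on projections of the 3D point clouds onto three planes. A minimal sketch of such a projection step is shown below; the bin count and normalization are assumptions, not the authors' preprocessing.

```python
# Minimal sketch (assumed preprocessing, not the authors' code): render a 3D Cav1 blob as
# three 2D histogram images (XY, XZ, YZ projections) that a 2D CNN can consume.
import numpy as np

def three_plane_projection(points, bins=32):
    """points: (N, 3) localizations of one blob; returns an array of shape (3, bins, bins)."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    rng = [(mins[i], maxs[i] + 1e-9) for i in range(3)]
    planes = []
    for a, b in [(0, 1), (0, 2), (1, 2)]:                      # XY, XZ, YZ
        h, _, _ = np.histogram2d(points[:, a], points[:, b],
                                 bins=bins, range=[rng[a], rng[b]])
        planes.append(h / max(h.max(), 1.0))                   # normalize to [0, 1]
    return np.stack(planes)

blob = np.random.default_rng(3).normal(scale=30.0, size=(500, 3))
print(three_plane_projection(blob).shape)   # (3, 32, 32) -> three-channel CNN input
```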
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0211659
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6709882
March 2020

Super-resolution modularity analysis shows polyhedral caveolin-1 oligomers combine to form scaffolds and caveolae.

Sci Rep 2019 07 8;9(1):9888. Epub 2019 Jul 8.

Department of Cellular and Physiological Sciences, Life Sciences Institute, University of British Columbia, Vancouver, BC, V6T 1Z3, Canada.

Caveolin-1 (Cav1), the coat protein for caveolae, also forms non-caveolar Cav1 scaffolds. Single molecule Cav1 super-resolution microscopy analysis previously identified caveolae and three distinct scaffold domains: smaller S1A and S1B scaffolds and larger hemispherical S2 scaffolds. Application here of network modularity analysis of SMLM data for endogenous Cav1 labeling in HeLa cells shows that small scaffolds combine to form larger scaffolds and caveolae. We find modules within Cav1 blobs by maximizing the intra-connectivity between Cav1 molecules within a module and minimizing the inter-connectivity between Cav1 molecules across modules, which is achieved via spectral decomposition of the localization adjacency matrix. Features of modules are then matched with intact blobs to find the similarity between the module-blob pairs of group centers. Our results show that smaller S1A and S1B scaffolds are made up of small polygons, that S1B scaffolds correspond to S1A scaffold dimers, and that caveolae and hemispherical S2 scaffolds are complex, modular structures formed from S1B and S1A scaffolds, respectively. Polyhedral interactions of Cav1 oligomers therefore lead progressively to the formation of larger and more complex scaffold domains and the biogenesis of caveolae.
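A minimal sketch of a single spectral-modularity split is shown below: an adjacency matrix is built by thresholding pairwise distances between localizations, and the sign of the leading eigenvector of the modularity matrix divides the blob into two modules. The paper's multi-module procedure goes further; the proximity threshold and data here are illustrative.

```python
# Minimal sketch of one spectral-modularity split (leading-eigenvector method):
# an adjacency matrix is built by thresholding pairwise distances between localizations,
# and nodes are divided into two modules by the sign of the leading eigenvector of the
# modularity matrix B = A - k k^T / (2m). The paper's full pipeline recurses further.
import numpy as np
from scipy.spatial.distance import squareform, pdist

def modularity_split(points, proximity=50.0):
    A = (squareform(pdist(points)) <= proximity).astype(float)
    np.fill_diagonal(A, 0.0)
    k = A.sum(axis=1)
    two_m = k.sum()
    B = A - np.outer(k, k) / two_m
    eigvals, eigvecs = np.linalg.eigh(B)
    leading = eigvecs[:, np.argmax(eigvals)]
    return leading >= 0.0                      # boolean module assignment per localization

pts = np.vstack([np.random.normal(0, 20, (50, 3)), np.random.normal(150, 20, (50, 3))])
print(modularity_split(pts).sum())             # roughly 50: the two sub-blobs separate
```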
http://dx.doi.org/10.1038/s41598-019-46174-z
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6614455
July 2019

Combo loss: Handling input and output imbalance in multi-organ segmentation.

Comput Med Imaging Graph 2019 07 9;75:24-33. Epub 2019 May 9.

School of Computing Science, Simon Fraser University, Canada.

Simultaneous segmentation of multiple organs from different medical imaging modalities is a crucial task as it can be utilized for computer-aided diagnosis, computer-assisted surgery, and therapy planning. Thanks to the recent advances in deep learning, several deep neural networks for medical image segmentation have been introduced successfully for this purpose. In this paper, we focus on learning a deep multi-organ segmentation network that labels voxels. In particular, we examine the critical choice of a loss function in order to handle the notorious imbalance problem that plagues both the input and output of a learning model. The input imbalance refers to the class-imbalance in the input training samples (i.e., small foreground objects embedded in an abundance of background voxels, as well as organs of varying sizes). The output imbalance refers to the imbalance between the false positives and false negatives of the inference model. In order to tackle both types of imbalance during training and inference, we introduce a new curriculum learning based loss function. Specifically, we leverage the Dice similarity coefficient to deter model parameters from being held at bad local minima and, at the same time, gradually learn better model parameters by penalizing for false positives/negatives using a cross entropy term. We evaluated the proposed loss function on three datasets: whole body positron emission tomography (PET) scans with 5 target organs, magnetic resonance imaging (MRI) prostate scans, and ultrasound echocardiography images with a single target organ, i.e., the left ventricle. We show that a simple network architecture with the proposed integrative loss function can outperform state-of-the-art methods and that the results of the competing methods can be improved when our proposed loss is used.
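A minimal sketch of a Dice plus weighted cross-entropy combination in the spirit described above is given below; the α/β weighting and the absence of a curriculum schedule are simplifications, so this is not the paper's exact loss.

```python
# Minimal sketch of a Dice + weighted cross-entropy "combo" loss for binary segmentation.
# alpha balances the two terms; beta > 0.5 penalizes false negatives more than false
# positives (and vice versa). The paper's exact formulation and curriculum may differ.
import torch

def combo_loss(probs, target, alpha=0.5, beta=0.7, eps=1e-7):
    """probs, target: (B, 1, H, W); probs in (0, 1), target in {0, 1}."""
    probs = probs.clamp(eps, 1.0 - eps)
    inter = (probs * target).sum()
    dice = (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)
    wce = -(beta * target * probs.log() + (1.0 - beta) * (1.0 - target) * (1.0 - probs).log()).mean()
    return alpha * wce + (1.0 - alpha) * (1.0 - dice)

target = (torch.rand(2, 1, 64, 64) > 0.9).float()
probs = torch.rand(2, 1, 64, 64, requires_grad=True)
combo_loss(probs, target).backward()
```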
http://dx.doi.org/10.1016/j.compmedimag.2019.04.005
July 2019

Special issue on machine learning in medical imaging.

Comput Med Imaging Graph 2019 06 16;74:10-11. Epub 2019 Mar 16.

School of Computing Science, Simon Fraser University, Canada.

http://dx.doi.org/10.1016/j.compmedimag.2019.03.003
June 2019

Identification of caveolin-1 domain signatures via machine learning and graphlet analysis of single-molecule super-resolution data.

Bioinformatics 2019 09;35(18):3468-3475

Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada.

Motivation: Network analysis and unsupervised machine learning processing of single-molecule localization microscopy of caveolin-1 (Cav1) antibody labeling of prostate cancer cells identified biosignatures and structures for caveolae and three distinct non-caveolar scaffolds (S1A, S1B and S2). To obtain further insight into low-level molecular interactions within these different structural domains, we now introduce graphlet decomposition over a range of proximity thresholds and show that the frequency of different subgraph patterns (k = 4 nodes), used in machine learning approaches (classification, identification, automatic labeling, etc.), effectively distinguishes caveolae and scaffold blobs.

Results: Caveolae formation requires both Cav1 and the adaptor protein CAVIN1 (also called PTRF). As a supervised learning approach, we applied a wide-field CAVIN1/PTRF mask to CAVIN1/PTRF-transfected PC3 prostate cancer cells and used the random forest classifier to classify blobs based on graphlet frequency distribution (GFD). GFD of CAVIN1/PTRF-positive (PTRF+) and -negative Cav1 clusters showed poor classification accuracy that was significantly improved by stratifying the PTRF+ clusters by either number of localizations or volume. Low classification accuracy (<50%) of large PTRF+ clusters and caveolae blobs identified by unsupervised learning suggests that their GFD is specific to caveolae. High classification accuracy for small PTRF+ clusters and caveolae blobs argues that CAVIN1/PTRF associates not only with caveolae but also with non-caveolar scaffolds. At low proximity thresholds (50-100 nm), the caveolae groups showed reduced frequency of highly connected graphlets and increased frequency of completely disconnected graphlets. GFD analysis of single-molecule localization microscopy Cav1 clusters defines changes in structural organization in caveolae and scaffolds independent of association with CAVIN1/PTRF.

Supplementary Information: Supplementary data are available at Bioinformatics online.
http://dx.doi.org/10.1093/bioinformatics/btz113
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6748737
September 2019

Predictive connectome subnetwork extraction with anatomical and connectivity priors.

Comput Med Imaging Graph 2019 01 25;71:67-78. Epub 2018 Aug 25.

Medical Image Analysis Lab, Simon Fraser University, Burnaby, BC, Canada.

We present a new method to identify anatomical subnetworks of the human connectome that are optimally predictive of targeted clinical variables, developmental outcomes or disease states. Given a training set of structural or functional brain networks, derived from diffusion MRI (dMRI) or functional MRI (fMRI) scans respectively, our sparse linear regression model extracts a weighted subnetwork. By enforcing novel backbone network and connectivity based priors along with a non-negativity constraint, the discovered subnetworks are simultaneously anatomically plausible, well connected, positively weighted and reasonably sparse. We apply our method to (1) predicting the cognitive and neuromotor developmental outcomes of a dataset of 168 structural connectomes of preterm neonates, and (2) predicting the autism spectrum category of a dataset of 1013 resting-state functional connectomes from the Autism Brain Imaging Data Exchange (ABIDE) database. We find that the addition of each of our novel priors improves prediction accuracy and together outperform other state-of-the-art prediction techniques. We then examine the structure of the learned subnetworks in terms of topological features and with respect to established function and physiology of different regions of the brain.
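Stripped of the backbone-network and connectivity priors, the core of the approach is a sparse, non-negative linear regression on vectorized connectomes. The sketch below shows that core with scikit-learn; the data, regularization strength, and outcome variable are synthetic stand-ins, and the paper's additional priors are not implemented here.

```python
# Minimal sketch of the core idea (sparse, non-negative linear regression on vectorized
# connectomes) without the paper's backbone and connectivity priors: each subject's
# connectivity matrix is flattened to its upper triangle and regressed against the
# clinical score with an L1 penalty and a non-negativity constraint.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n_subjects, n_rois = 100, 30
conn = rng.random((n_subjects, n_rois, n_rois))
conn = (conn + conn.transpose(0, 2, 1)) / 2.0              # symmetric connectomes
iu = np.triu_indices(n_rois, k=1)
X = conn[:, iu[0], iu[1]]                                  # (subjects, edges)
y = X[:, :10].sum(axis=1) + 0.1 * rng.normal(size=n_subjects)   # toy outcome

model = Lasso(alpha=0.01, positive=True).fit(X, y)
subnetwork_edges = np.flatnonzero(model.coef_)             # selected, positively weighted edges
print(len(subnetwork_edges))
```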
http://dx.doi.org/10.1016/j.compmedimag.2018.08.009
January 2019

Automatic localization of normal active organs in 3D PET scans.

Comput Med Imaging Graph 2018 12 29;70:111-118. Epub 2018 Sep 29.

Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Canada.

PET imaging captures the metabolic activity of tissues and is commonly visually interpreted by clinicians for detecting cancer, assessing tumor progression, and evaluating response to treatment. To automate accomplishing these tasks, it is important to distinguish between normal active organs and activity due to abnormal tumor growth. In this paper, we propose a deep learning method to localize and detect normal active organs visible in a 3D PET scan field-of-view. Our method adapts the deep network architecture of YOLO to detect multiple organs in 2D slices and aggregates the results to produce semantically labeled 3D bounding boxes. We evaluate our method on 479 18F-FDG PET scans of 156 patients achieving an average organ detection precision of 75-98%, recall of 94-100%, average bounding box centroid localization error of less than 14 mm, wall localization error of less than 24 mm and a mean IOU of up to 72%.
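A minimal sketch of one plausible way to aggregate per-slice 2D detections into a labeled 3D bounding box is shown below; the paper's combined sagittal-axial aggregation is more involved, and the function and data here are illustrative.

```python
# Minimal sketch (assumed aggregation, details differ from the paper): combine per-slice
# 2D detections of one organ into a single 3D bounding box by taking the extent of the
# 2D boxes over the axial slices where the organ was detected.
import numpy as np

def aggregate_3d_box(slice_detections):
    """slice_detections: list of (z_index, (x_min, y_min, x_max, y_max)) for one organ."""
    zs = [z for z, _ in slice_detections]
    boxes = np.array([b for _, b in slice_detections])
    x_min, y_min = boxes[:, 0].min(), boxes[:, 1].min()
    x_max, y_max = boxes[:, 2].max(), boxes[:, 3].max()
    return {"x": (x_min, x_max), "y": (y_min, y_max), "z": (min(zs), max(zs))}

dets = [(40, (100, 80, 160, 150)), (41, (98, 82, 162, 148)), (42, (101, 79, 159, 151))]
print(aggregate_3d_box(dets))   # 3D box for, e.g., the left kidney
```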
http://dx.doi.org/10.1016/j.compmedimag.2018.09.008
December 2018

Fully Convolutional Neural Networks to Detect Clinical Dermoscopic Features.

IEEE J Biomed Health Inform 2019 03 1;23(2):578-585. Epub 2018 May 1.

The presence of certain clinical dermoscopic features within a skin lesion may indicate melanoma, and automatically detecting these features may lead to more quantitative and reproducible diagnoses. We reformulate the task of classifying clinical dermoscopic features within superpixels as a segmentation problem, and propose a fully convolutional neural network to detect clinical dermoscopic features from dermoscopy skin lesion images. Our neural network architecture uses interpolated feature maps from several intermediate network layers, and addresses imbalanced labels by minimizing a negative multilabel Dice-F score, where the score is computed across the minibatch for each label. Our approach ranked first place in the 2017 ISIC-ISBI Part 2: Dermoscopic Feature Classification Task challenge, over both the provided validation and test datasets, achieving a 0.895 area under the receiver operating characteristic curve score. We show how simple baseline models can outrank state-of-the-art approaches when using the official metrics of the challenge, and propose to use a fuzzy Jaccard Index that ignores the empty set (i.e., masks devoid of positive pixels) when ranking models. Our results suggest that the classification of clinical dermoscopic features can be effectively approached as a segmentation problem, and that the current metrics used to rank models may not well capture the efficacy of the model. We plan to make our trained model and code publicly available.
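The fuzzy Jaccard index proposed above can be sketched as a per-image fuzzy intersection over union, with images whose ground-truth mask is empty skipped from the average. The epsilon handling and averaging scheme below are assumptions about details not spelled out here.

```python
# Minimal sketch of a fuzzy Jaccard index that ignores the empty set: images whose
# ground-truth mask has no positive pixels are skipped when averaging, so a model is
# neither rewarded nor punished on them. Exact handling in the paper may differ.
import numpy as np

def fuzzy_jaccard(pred_probs, gt_masks, eps=1e-7):
    """pred_probs, gt_masks: lists of (H, W) arrays; probs in [0, 1], masks in {0, 1}."""
    scores = []
    for p, g in zip(pred_probs, gt_masks):
        if g.sum() == 0:
            continue                                  # ignore images with an empty ground truth
        inter = np.minimum(p, g).sum()                # fuzzy intersection
        union = np.maximum(p, g).sum()                # fuzzy union
        scores.append((inter + eps) / (union + eps))
    return float(np.mean(scores)) if scores else float("nan")

gt = [np.zeros((8, 8)), np.ones((8, 8))]
pred = [np.random.rand(8, 8), np.full((8, 8), 0.8)]
print(fuzzy_jaccard(pred, gt))   # only the second image contributes
```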
http://dx.doi.org/10.1109/JBHI.2018.2831680
March 2019

7-Point Checklist and Skin Lesion Classification using Multi-Task Multi-Modal Neural Nets.

IEEE J Biomed Health Inform 2018 Apr 9. Epub 2018 Apr 9.

We propose a multi-task deep convolutional neural network, trained on multi-modal data (clinical and dermoscopic images, and patient meta-data), to classify the 7-point melanoma checklist criteria and perform skin lesion diagnosis. Our neural network is trained using several multi-task loss functions, where each loss considers different combinations of the input modalities, which allows our model to be robust to missing data at inference time. Our final model classifies the 7-point checklist and skin condition diagnosis, produces multi-modal feature vectors suitable for image retrieval, and localizes clinically discriminant regions. We benchmark our approach using 1011 lesion cases, and report comprehensive results over all 7-point criteria and diagnosis. We also make our dataset (images and metadata) publicly available online at http://derm.cs.sfu.ca.
http://dx.doi.org/10.1109/JBHI.2018.2824327
April 2018

Super Resolution Network Analysis Defines the Molecular Architecture of Caveolae and Caveolin-1 Scaffolds.

Sci Rep 2018 06 13;8(1):9009. Epub 2018 Jun 13.

Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby, BC V5A 1S6, Canada.

Quantitative approaches to analyze the large data sets generated by single molecule localization super-resolution microscopy (SMLM) are limited. We developed a computational pipeline and applied it to analyzing 3D point clouds of SMLM localizations (event lists) of the caveolar coat protein, caveolin-1 (Cav1), in prostate cancer cells differentially expressing CAVIN1 (also known as PTRF), which is also required for caveolae formation. High degree (strongly-interacting) points were removed by an iterative blink merging algorithm, and Cav1 network properties were compared with randomly generated networks to retain a sub-network of geometric structures (or blobs). Machine-learning based classification extracted 28 quantitative features describing the size, shape, topology and network characteristics of ∼80,000 blobs. Unsupervised clustering identified small S1A scaffolds corresponding to SDS-resistant Cav1 oligomers, as yet undescribed larger hemi-spherical S2 scaffolds and, only in CAVIN1-expressing cells, spherical, hollow caveolae. Multi-threshold modularity analysis suggests that S1A scaffolds interact to form larger scaffolds and that S1A dimers group together, in the presence of CAVIN1, to form the caveolae coat.
http://dx.doi.org/10.1038/s41598-018-27216-4
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5998020
June 2018

Adversarial Stain Transfer for Histopathology Image Analysis.

IEEE Trans Med Imaging 2018 03;37(3):792-802

It is generally recognized that color information is central to the automatic and visual analysis of histopathology tissue slides. In practice, pathologists rely on color, which reflects the presence of specific tissue components, to establish a diagnosis. Similarly, automatic histopathology image analysis algorithms rely on color or intensity measures to extract tissue features. With the increasing access to digitized histopathology images, color variation and its implications have become a critical issue. These variations are the result not only of the variety of factors involved in the preparation of tissue slides but also of the digitization process itself. Consequently, different strategies have been proposed to alleviate stain-related tissue inconsistencies in automatic image analysis systems. Such techniques generally rely on collecting color statistics to perform color matching across images. In this work, we propose a different approach for stain normalization that we refer to as stain transfer. We design a discriminative image analysis model equipped with a stain normalization component that transfers stains across datasets. Our model comprises a generative network that learns data set-specific staining properties and image-specific color transformations, as well as a task-specific network (e.g., classifier or segmentation network). The model is trained end-to-end using a multi-objective cost function. We evaluate the proposed approach in the context of automatic histopathology image analysis on three data sets and two different analysis tasks: tissue segmentation and classification. The proposed method achieves superior results in terms of accuracy and quality of normalized images compared to various baselines.
http://dx.doi.org/10.1109/TMI.2017.2781228
March 2018

Segmentation-free direct tumor volume and metabolic activity estimation from PET scans.

Comput Med Imaging Graph 2018 01 27;63:52-66. Epub 2017 Dec 27.

Medical Image Analysis Lab, Simon Fraser University, Canada.

Tumor volume and metabolic activity are two robust imaging biomarkers for predicting early therapy response in 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET), a modality that images the distribution of radiotracers and thereby captures functional processes in the body. To date, estimation of these two biomarkers requires a lesion segmentation step. While segmentation methods requiring extensive user interaction have obvious limitations in terms of time and reproducibility, automatically estimating activity from segmentation, which involves integrating intensity values over the volume, is also suboptimal, since PET is an inherently noisy modality. Although many semi-automatic segmentation based methods have been developed, in this paper we introduce a method which completely eliminates the segmentation step and directly estimates the volume and activity of the lesions. We trained two parallel ensemble models using locally extracted 3D patches from phantom images to estimate the activity and volume, from which other important quantification metrics such as standardized uptake value (SUV) and total lesion glycolysis (TLG) can be derived. For validation, we used 54 clinical images from the QIN Head and Neck collection on The Cancer Imaging Archive, as well as a set of 55 PET scans of the Elliptical Lung-Spine Body Phantom™ with different levels of noise, four different reconstruction methods, and three different background activities, namely air, water, and a hot background. In the validation on phantom images, we achieved relative absolute error (RAE) of 5.11% ± 3.5% and 5.7% ± 5.25% for volume and activity estimation, respectively, which represents improvements of over 20% and 6%, respectively, compared with the best competing methods. From the validation performed using clinical images, we found that the proposed method is capable of obtaining almost the same level of agreement with a group of trained experts as a single trained expert is, indicating that the method has the potential to be a useful tool in clinical practice.
http://dx.doi.org/10.1016/j.compmedimag.2017.12.004
January 2018

Segmentation and Measurement of Chronic Wounds for Bioprinting.

IEEE J Biomed Health Inform 2018 07 23;22(4):1269-1277. Epub 2017 Aug 23.

Objective: To provide a proof-of-concept tool for segmenting chronic wounds and transmitting the results as instructions and coordinates to a bioprinter robot and thus facilitate the treatment of chronic wounds.

Methods: Several segmentation methods used for measuring wound geometry, including edge-detection and morphological operations, region-growing, Livewire, active contours, and texture segmentation, were compared on 26 images from 15 subjects. Ground-truth wound delineations were generated by a dermatologist. The wound coordinates were converted into G-code understandable by the bioprinting robot (a toy coordinate-to-G-code conversion is sketched after this abstract). Due to its desirable properties, alginate hydrogel was synthesized by dissolving 16% (w/v) sodium-alginate and 4% (w/v) gelatin in deionized water and used for cell encapsulation.

Results: Livewire achieved the best performance with minimal user interaction, with mean values of 97.08%, 99.68%, 96.67%, 96.22, 98.15, and 32.26 for accuracy, sensitivity, specificity, Jaccard index, Dice similarity coefficient, and Hausdorff distance, respectively. The bioprinter robot was able to print skin cells on the surface of skin with a 95.56% similarity between the bioprinted patch's dimensions and the desired wound geometry.

Conclusion: We have designed a novel approach for the healing of chronic wounds, based on semiautomatic segmentation of wound images, improving clinicians' control of the bioprinting process through more accurate coordinates.

Significance: This study is the first to perform wound bioprinting based on image segmentation. It also compares several segmentation methods used for this purpose to determine the best.
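As referenced in the Methods, a toy sketch of turning a segmented wound boundary into G-code moves is given below. The specific G-code commands, feed rate, and scale factor are illustrative assumptions and are not taken from the paper.

```python
# Toy sketch (illustrative only; commands, feed rate, and extrusion handling are assumed,
# not taken from the paper): convert a segmented wound boundary, given as pixel
# coordinates, into simple G-code moves tracing the wound outline.
def contour_to_gcode(contour_px, mm_per_px=0.1, feed_rate=300):
    """contour_px: list of (x, y) pixel coordinates along the wound boundary."""
    lines = ["G21 ; units in mm", "G90 ; absolute positioning"]
    x0, y0 = contour_px[0]
    lines.append(f"G0 X{x0 * mm_per_px:.2f} Y{y0 * mm_per_px:.2f} ; move to start")
    for x, y in contour_px[1:]:
        lines.append(f"G1 X{x * mm_per_px:.2f} Y{y * mm_per_px:.2f} F{feed_rate}")
    return "\n".join(lines)

wound_outline = [(120, 80), (140, 85), (150, 100), (135, 115), (118, 105)]
print(contour_to_gcode(wound_outline))
```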
http://dx.doi.org/10.1109/JBHI.2017.2743526
July 2018

Association of bladder dose with late urinary side effects in cervical cancer high-dose-rate brachytherapy.

Brachytherapy 2017 Nov - Dec;16(6):1175-1183. Epub 2017 Aug 17.

British Columbia Cancer Agency, Vancouver Centre, Vancouver, BC, Canada.

Purpose: The purpose of this work was to study the association between specific urinary sequelae and locally accumulated dose to the bladder wall and bladder neck in the treatment of cervical cancer with multifraction high-dose-rate (HDR) brachytherapy.

Methods And Materials: A cohort of 60 cervical cancer patients, treated with both external beam and five HDR brachytherapy insertions between 2008 and 2014 at the BC Cancer Agency, was identified. The accumulated dose received over five brachytherapy sessions was evaluated for the bladder wall and bladder neck of each patient using dosimetric parameters calculated from deformably registered image data sets. These parameters were examined as potential predictors of urinary sequelae including hematuria, frequency, urgency, incontinence, stream, nocturia, and dysuria. Two different dichotomization schemes were evaluated for grouping patients into Case and Control groups. The two-sample Student's t test was used for normally distributed samples and the Mann-Whitney nonparametric U test for non-normal distributions.

Results: A strong association between dose to the bladder neck and incontinence was found (p = 0.001). A statistically significant association (p < 0.05) was also observed between urgency and certain bladder-wall parameters.

Conclusions: Localized dose to the bladder neck is a potential predictor of urinary incontinence, whereas weaker associations were observed between urgency and some bladder-wall parameters.
http://dx.doi.org/10.1016/j.brachy.2017.07.001
June 2018

Modelling and extraction of pulsatile radial distension and compression motion for automatic vessel segmentation from video.

Med Image Anal 2017 Aug 29;40:184-198. Epub 2017 Jun 29.

Biomedical Signal and Image Computing Lab, University of British Columbia, Vancouver, BC, Canada.

Identification of vascular structures from medical images is integral to many clinical procedures. Most vessel segmentation techniques ignore the characteristic pulsatile motion of vessels in their formulation. In a recent effort to automatically segment vessels that are hidden under fat, we motivated the use of the magnitude of local pulsatile motion extracted from surgical endoscopic video. In this article, we propose a new approach that leverages the local orientation of motion, in addition to its magnitude, and demonstrate that the extended computation and utilization of motion vectors can improve the segmentation of vascular structures. We implement our approach using four alternatives to magnitude-only motion estimation, by using traditional optical flow and by exploiting the monogenic signal for fast flow estimation. Our evaluations are conducted on both synthetic phantoms and two real ultrasound datasets, showing improved segmentation results with negligible change in computational performance compared to the previous magnitude-only approach.
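A minimal sketch of the optical-flow building block is shown below: dense Farnebäck flow between consecutive frames yields per-pixel motion magnitude and orientation, whose time series could then be examined for a pulsatile component. The parameters and synthetic frames are illustrative; this is not the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): dense optical flow between consecutive
# frames with OpenCV's Farneback method, yielding per-pixel motion magnitude and
# orientation; a pulsatile (heart-rate) component could then be sought in the
# per-pixel magnitude/orientation time series.
import numpy as np
import cv2

def flow_mag_angle(prev_gray, next_gray):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return mag, ang

rng = np.random.default_rng(5)
f0 = rng.integers(0, 255, (120, 160), dtype=np.uint8)
f1 = np.roll(f0, shift=1, axis=1)                      # simulate 1-pixel lateral motion
mag, ang = flow_mag_angle(f0, f1)
print(mag.mean())
```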
http://dx.doi.org/10.1016/j.media.2017.06.009
August 2017

Pareto-optimal multi-objective dimensionality reduction deep auto-encoder for mammography classification.

Comput Methods Programs Biomed 2017 Jul 13;145:85-93. Epub 2017 Apr 13.

Medical Image Analysis Lab, Simon Fraser University, Canada.

Background And Objective: Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential).

Methods: In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm (a minimal non-dominated sorting sketch follows this abstract).

Results: We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature.

Conclusions: We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features.
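As referenced in the Methods, a minimal sketch of the non-dominated (Pareto) sorting step over the two objectives is given below; the full genetic algorithm adds selection, crossover, and mutation on top of this, and the candidate scores are made up.

```python
# Minimal sketch: identifying the Pareto front (non-dominated set) among candidate
# auto-encoder configurations scored on two objectives to be minimized, mean
# reconstruction error (MRE) and mean classification error (MCE). The full NSGA-style
# algorithm adds selection, crossover, and mutation on top of this sorting step.
def pareto_front(candidates):
    """candidates: list of (name, mre, mce); returns the non-dominated subset."""
    front = []
    for name, mre, mce in candidates:
        dominated = any((m2 <= mre and c2 <= mce) and (m2 < mre or c2 < mce)
                        for _, m2, c2 in candidates)
        if not dominated:
            front.append((name, mre, mce))
    return front

scores = [("net-A", 0.020, 0.12), ("net-B", 0.015, 0.15),
          ("net-C", 0.030, 0.08), ("net-D", 0.025, 0.14)]
print(pareto_front(scores))   # net-D is dominated by net-A; the rest form the front
```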
http://dx.doi.org/10.1016/j.cmpb.2017.04.012
July 2017

A structured latent model for ovarian carcinoma subtyping from histopathology slides.

Med Image Anal 2017 Jul 9;39:194-205. Epub 2017 May 9.

Department of Computing Science, Medical Image Analysis Lab, Simon Fraser University, Burnaby, Canada.

Accurate subtyping of ovarian carcinomas is an increasingly critical and often challenging diagnostic process. This work focuses on the development of an automatic classification model for ovarian carcinoma subtyping. Specifically, we present a novel clinically inspired contextual model for histopathology image subtyping of ovarian carcinomas. A whole slide image is modelled using a collection of tissue patches extracted at multiple magnifications. An efficient and effective feature learning strategy is used for feature representation of a tissue patch. The locations of salient, discriminative tissue regions are treated as latent variables allowing the model to explicitly ignore portions of the large tissue section that are unimportant for classification. These latent variables are considered in a structured formulation to model the contextual information represented from the multi-magnification analysis of tissues. A novel, structured latent support vector machine formulation is defined and used to combine information from multiple magnifications while simultaneously operating within the latent variable framework. The structural and contextual nature of our method addresses the challenges of intra-class variation and pathologists' workload, which are prevalent in histopathology image classification. Extensive experiments on a dataset of 133 patients demonstrate the efficacy and accuracy of the proposed method against state-of-the-art approaches for histopathology image classification. We achieve an average multi-class classification accuracy of 90%, outperforming existing works while obtaining substantial agreement with six clinicians tested on the same dataset.
http://dx.doi.org/10.1016/j.media.2017.04.008
July 2017

Multi-site quality and variability analysis of 3D FDG PET segmentations based on phantom and clinical image data.

Med Phys 2017 Feb;44(2):479-496

Department of Radiation Oncology, The University of Iowa, Iowa City, IA, USA.

Purpose: Radiomics utilizes a large number of image-derived features for quantifying tumor characteristics that can in turn be correlated with response and prognosis. Unfortunately, extraction and analysis of such image-based features is subject to measurement variability and bias. The challenge for radiomics is particularly acute in Positron Emission Tomography (PET) where limited resolution, a high noise component related to the limited stochastic nature of the raw data, and the wide variety of reconstruction options confound quantitative feature metrics. Extracted feature quality is also affected by tumor segmentation methods used to define regions over which to calculate features, making it challenging to produce consistent radiomics analysis results across multiple institutions that use different segmentation algorithms in their PET image analysis. Understanding each element contributing to these inconsistencies in quantitative image feature and metric generation is paramount for ultimate utilization of these methods in multi-institutional trials and clinical oncology decision making.

Methods: To assess segmentation quality and consistency at the multi-institutional level, we conducted a study of seven institutional members of the National Cancer Institute Quantitative Imaging Network. For the study, members were asked to segment a common set of phantom PET scans acquired over a range of imaging conditions as well as a second set of head and neck cancer (HNC) PET scans. Segmentations were generated at each institution using their preferred approach. In addition, participants were asked to repeat segmentations with a time interval between initial and repeat segmentation. This procedure resulted in a total of 806 phantom insert and 641 lesion segmentations. Subsequently, the volume was computed from the segmentations and compared to the corresponding reference volume by means of statistical analysis.

Results: On the two test sets (phantom and HNC PET scans), the performance of the seven segmentation approaches was as follows. On the phantom test set, the mean relative volume errors ranged from 29.9% to 87.8% of the ground truth reference volumes, and the repeat difference for each institution ranged from -36.4% to 39.9%. On the HNC test set, the mean relative volume error ranged from -50.5% to 701.5%, and the repeat difference for each institution ranged from -37.7% to 31.5%. In addition, performance measures per phantom insert/lesion size category are given in the paper. On phantom data, regression analysis resulted in coefficient of variation (CV) components of 42.5% for scanners, 26.8% for institutional approaches, 21.1% for repeated segmentations, 14.3% for relative contrasts, 5.3% for count statistics (acquisition times), and 0.0% for repeated scans. Analysis showed that the CV components for approaches and repeated segmentations were significantly larger on the HNC test set, with increases of 112.7% and 102.4%, respectively.

Conclusion: Analysis results underline the importance of PET scanner reconstruction harmonization and imaging protocol standardization for quantification of lesion volumes. In addition, to enable a distributed multi-site analysis of FDG PET images, harmonization of analysis approaches and operator training in combination with highly automated segmentation methods seems to be advisable. Future work will focus on quantifying the impact of segmentation variation on radiomics system performance.
http://dx.doi.org/10.1002/mp.12041
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5834232
February 2017