Publications by authors named "Riddhish Bhalodia"

10 Publications


Leveraging unsupervised image registration for discovery of landmark shape descriptor.

Med Image Anal 2021 Jul 9;73:102157. Epub 2021 Jul 9.

Scientific Computing and Imaging Institute, 72 Central Campus Dr, University of Utah, Salt Lake City, Utah-84112, USA; School of Computing, 50 Central Campus Dr, University of Utah, Salt Lake City, Utah-84112, USA.

In current biological and medical research, statistical shape modeling (SSM) provides an essential framework for the characterization of anatomy/morphology. Such analysis is often driven by the identification of a relatively small number of geometrically consistent features found across the samples of a population. These features can subsequently provide information about the population shape variation. Dense correspondence models offer ease of computation and, when followed by dimensionality reduction, yield an interpretable low-dimensional shape descriptor. However, automatic methods for obtaining such correspondences usually require image segmentation followed by significant preprocessing, which is taxing in terms of both computation and human resources. In many cases, the segmentation and subsequent processing require manual guidance and anatomy-specific domain expertise. This paper proposes a self-supervised deep learning approach for discovering landmarks from images that can directly be used as a shape descriptor for subsequent analysis. We use landmark-driven image registration as the primary task to force the neural network to discover landmarks that register the images well. We also propose a regularization term that allows for robust optimization of the neural network and ensures that the landmarks uniformly span the image domain. The proposed method circumvents segmentation and preprocessing and produces a usable shape descriptor directly from 2D or 3D images. In addition, we propose two variants of the training loss function that allow prior shape information to be integrated into the model. We apply this framework to several 2D and 3D datasets to obtain their shape descriptors, and we analyze these descriptors for their efficacy in capturing shape information through shape-driven applications ranging from shape clustering to severity prediction to outcome diagnosis.
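The abstract does not give the exact form of the regularization term that makes landmarks "uniformly span the image domain"; as a hedged illustration only, a pairwise-affinity penalty of the following kind (all names hypothetical, numpy only) discourages discovered landmarks from collapsing onto each other:

```python
import numpy as np

def uniformity_penalty(landmarks, sigma=0.1):
    """Toy regularizer: sum Gaussian affinities over all landmark pairs,
    so crowded configurations score higher (worse) than spread-out ones."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]   # (N, N, D) pairwise offsets
    d2 = (diffs ** 2).sum(-1)                                # squared distances
    affinity = np.exp(-d2 / (2 * sigma ** 2))
    n = len(landmarks)
    return (affinity.sum() - n) / (n * (n - 1))              # drop self-pairs, average

clustered = np.array([[0.5, 0.5], [0.51, 0.5], [0.5, 0.51], [0.51, 0.51]])
spread = np.array([[0.1, 0.1], [0.9, 0.1], [0.1, 0.9], [0.9, 0.9]])
assert uniformity_penalty(clustered) > uniformity_penalty(spread)
```

In training, a term like this would be added to the registration loss so the network cannot satisfy the image-matching objective with degenerate, co-located landmarks.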
Source: http://dx.doi.org/10.1016/j.media.2021.102157

Uncertain-DeepSSM: From Images to Probabilistic Shape Models.

Shape Med Imaging (2020) 2020 Oct 3;12474:57-72. Epub 2020 Oct 3.

Scientific Computing and Imaging Institute, University of Utah, UT, USA.

Statistical shape modeling (SSM) has recently taken advantage of advances in deep learning to alleviate the need for a time-consuming and expert-driven workflow of anatomy segmentation, shape registration, and the optimization of population-level shape representations. DeepSSM is an end-to-end deep learning approach that extracts statistical shape representations directly from unsegmented images with little manual overhead. It performs comparably with state-of-the-art shape modeling methods for estimating morphologies that are viable for subsequent downstream tasks. Nonetheless, DeepSSM produces an overconfident estimate of shape that cannot be blindly assumed to be accurate. Hence, conveying what DeepSSM does not know, via granular estimates of uncertainty, is critical for its direct clinical application as an on-demand diagnostic tool to determine how trustworthy the model output is. Here, we propose Uncertain-DeepSSM as a unified model that quantifies both data-dependent aleatoric uncertainty, by adapting the network to predict intrinsic input variance, and model-dependent epistemic uncertainty, via Monte Carlo dropout sampling to approximate a variational distribution over the network parameters. Experiments show an accuracy improvement over DeepSSM while maintaining the same benefits of being end-to-end with little pre-processing.
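The aleatoric/epistemic decomposition described above can be sketched independently of any particular network. In this toy, `fake_net` is a stand-in for a dropout network that predicts a (mean, variance) pair, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_uncertainty(stochastic_forward, x, n_samples=50):
    """Run a stochastic (dropout-active) forward pass n_samples times.
    Variance of the mean predictions -> epistemic uncertainty;
    average of the predicted variances -> aleatoric uncertainty."""
    means, variances = zip(*(stochastic_forward(x) for _ in range(n_samples)))
    means, variances = np.array(means), np.array(variances)
    epistemic = means.var(axis=0)
    aleatoric = variances.mean(axis=0)
    return means.mean(axis=0), epistemic, aleatoric

def fake_net(x):
    # Stand-in: dropout-induced variation in the mean, fixed predicted variance.
    noise = rng.normal(0, 0.05, size=x.shape)
    return x + noise, np.full_like(x, 0.01)

pred, epi, alea = mc_dropout_uncertainty(fake_net, np.zeros(3))
```

The total predictive variance would then be `epi + alea`, the usual combination when both sources are modeled.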
Source: http://dx.doi.org/10.1007/978-3-030-61056-2_5
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8011333

Self-Supervised Discovery of Anatomical Shape Landmarks.

Med Image Comput Comput Assist Interv 2020 Oct 29;12264:627-638. Epub 2020 Sep 29.

Scientific Computing and Imaging Institute, University of Utah.

Statistical shape analysis is a very useful tool in a wide range of medical and biological applications. However, it typically relies on the ability to produce a relatively small number of features that can capture the relevant variability in a population. State-of-the-art methods for obtaining such anatomical features rely on either extensive preprocessing or segmentation and/or significant tuning and post-processing. These shortcomings limit the widespread use of shape statistics. We propose that effective shape representations should provide sufficient information to align/register images. Under this assumption, we propose a self-supervised neural network approach for automatically positioning and detecting landmarks in images that can be used for subsequent analysis. The network discovers landmarks corresponding to anatomical shape features that promote good image registration in the context of a particular class of transformations. In addition, we propose a regularization for the network that encourages a uniform distribution of the discovered landmarks. We present a complete framework that takes only a set of input images and produces landmarks immediately usable for statistical shape analysis. We evaluate the performance on a phantom dataset as well as 2D and 3D images.
Source: http://dx.doi.org/10.1007/978-3-030-59719-1_61
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7993653

dpVAEs: Fixing Sample Generation for Regularized VAEs.

Comput Vis ACCV 2020 Nov-Dec;12625:643-660. Epub 2021 Feb 25.

Scientific Computing and Imaging Institute, School of Computing, University of Utah, Salt Lake City, UT, USA.

Unsupervised representation learning via generative modeling is a staple of many computer vision applications in the absence of labeled data. Variational Autoencoders (VAEs) are powerful generative models that learn representations useful for data generation. However, due to inherent challenges in the training objective, VAEs fail to learn representations amenable to downstream tasks. Regularization-based methods that attempt to improve the representation learning aspect of VAEs come at a price: poor sample generation. In this paper, we explore this representation-generation trade-off for regularized VAEs and introduce a new family of priors, namely decoupled priors, or dpVAEs, that decouple the representation space from the generation space. This decoupling enables the use of VAE regularizers on the representation space without impacting the distribution used for sample generation, thereby reaping the representation learning benefits of the regularizations without sacrificing sample generation. dpVAEs leverage invertible networks to learn a bijective mapping from an arbitrarily complex representation distribution to a simple, tractable generative distribution. Decoupled priors can be adapted to state-of-the-art VAE regularizers without additional hyperparameter tuning. We showcase the use of dpVAEs with different regularizers. Experiments on MNIST, SVHN, and CelebA demonstrate, quantitatively and qualitatively, that dpVAEs fix sample generation for regularized VAEs.
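The bijective mapping that dpVAEs build from invertible networks can be illustrated with a single affine coupling layer, a generic normalizing-flow building block (this is a minimal sketch, not the paper's exact architecture): half of the vector passes through unchanged and parameterizes an invertible scale-and-shift of the other half.

```python
import numpy as np

class AffineCoupling:
    """Minimal invertible coupling layer: z1 is passed through unchanged and
    used to scale/shift z2, so the inverse can be computed exactly."""
    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W_s = rng.normal(0, 0.1, (dim // 2, dim - dim // 2))  # scale weights
        self.W_t = rng.normal(0, 0.1, (dim // 2, dim - dim // 2))  # shift weights

    def forward(self, z):
        z1, z2 = z[: len(z) // 2], z[len(z) // 2 :]
        s, t = z1 @ self.W_s, z1 @ self.W_t
        return np.concatenate([z1, z2 * np.exp(s) + t])

    def inverse(self, y):
        y1, y2 = y[: len(y) // 2], y[len(y) // 2 :]
        s, t = y1 @ self.W_s, y1 @ self.W_t
        return np.concatenate([y1, (y2 - t) * np.exp(-s)])

layer = AffineCoupling(4)
z = np.array([0.3, -1.2, 0.7, 2.0])
assert np.allclose(layer.inverse(layer.forward(z)), z)
```

Stacking such layers (with the roles of the halves alternating) yields an arbitrarily complex yet exactly invertible map, which is what lets a decoupled prior keep a simple generation-space distribution.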
Source: http://dx.doi.org/10.1007/978-3-030-69538-5_39
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7993751

A Cooperative Autoencoder for Population-Based Regularization of CNN Image Registration.

Med Image Comput Comput Assist Interv 2019 Oct 10;11765:391-400. Epub 2019 Oct 10.

Scientific Computing and Imaging Institute, University of Utah.

Spatial transformations enable a variety of medical image analysis applications that entail aligning images to a common coordinate system. Population analysis of such transformations is expected to capture the underlying image and shape variations, and hence these transformations are required to produce correspondences. This is usually enforced through some smoothness-based generic metric or regularization of the deformation field. Alternatively, population-based regularization has been shown to produce anatomically accurate correspondences in cases where anatomically unaware (i.e., data-independent) regularization fails. Recently, deep networks have been used to generate spatial transformations in an unsupervised manner, and, once trained, these networks are computationally faster than and as accurate as conventional, optimization-based registration methods. However, the deformation fields produced by these networks require smoothness penalties, just as in conventional registration methods, and ignore population-level statistics of the transformations. Here, we propose a novel neural network architecture that simultaneously learns and uses the population-level statistics of the spatial transformations to regularize the network for unsupervised image registration. This regularization takes the form of a bottleneck autoencoder, which learns and adapts to the population of transformations required to align input images by encoding the transformations onto a low-dimensional manifold. The proposed architecture produces deformation fields that describe population-level features and associated correspondences in an anatomically relevant manner and are statistically compact relative to state-of-the-art approaches, while maintaining computational efficiency. We demonstrate the efficacy of the proposed architecture on synthetic datasets, as well as 2D and 3D medical data.
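As a rough linear analogue of the bottleneck autoencoder, one can project flattened deformation fields onto a few PCA modes and penalize the reconstruction residual; fields that lie near the population's low-dimensional manifold incur little penalty. This is an illustrative sketch, not the paper's network:

```python
import numpy as np

def bottleneck_penalty(fields, k=2):
    """Mean squared residual after projecting each flattened deformation
    field onto its top-k PCA modes (a linear stand-in for the bottleneck)."""
    X = fields.reshape(len(fields), -1)
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    recon = X @ Vt[:k].T @ Vt[:k]        # project onto k modes and back
    return ((X - recon) ** 2).mean()

rng = np.random.default_rng(1)
modes = rng.normal(size=(2, 32))
coeffs = rng.normal(size=(20, 2))
low_rank = coeffs @ modes                            # fields on a 2D manifold
noisy = low_rank + rng.normal(0, 1, (20, 32))        # off-manifold fields
assert bottleneck_penalty(low_rank) < bottleneck_penalty(noisy)
```

In the paper's setting the autoencoder is learned jointly with the registration network, so the penalty adapts to the population rather than being fixed up front.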
Source: http://dx.doi.org/10.1007/978-3-030-32245-8_44
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7425577

Interatrial Septum and Appendage Ostium in Atrial Fibrillation Patients: A Population Study.

Comput Cardiol (2010) 2019 Sep 24;46. Epub 2020 Feb 24.

Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, USA.

Left atrial appendage (LAA) closure is performed in atrial fibrillation (AF) patients to help prevent stroke. LAA closure using an occlusion implant is performed under imaging guidance. However, occlusion can be a complicated process due to the highly variable and heterogeneous LAA shapes across patients. Patient-specific implant selection and insertion processes are key to the success of the procedure, yet subjective in nature. A population study of the angle of entry at the interatrial septum relative to the appendage can assist in both catheter design and patient-specific implant choice. In our population study, we analyzed the inherent clusters of the angles obtained between the septum normal and the LAA ostium plane. The number of inherent angle clusters matched the four LAA morphological classifications reported in the literature. Further, our exploratory analysis revealed that the normal from the ostium plane does not intersect the septum in all the samples under study. The insights gained from this study can assist in making objective decisions during LAA closure.
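The angle between the septum normal and the LAA ostium plane reduces to vector geometry once both normals are available; a minimal sketch, assuming each plane is represented by a (not necessarily unit) normal vector:

```python
import numpy as np

def angle_between_normal_and_plane(n_septum, n_ostium):
    """Angle (degrees) between the septum normal and the ostium plane:
    90 degrees minus the angle between the two unit normals."""
    n1 = np.asarray(n_septum, float); n1 /= np.linalg.norm(n1)
    n2 = np.asarray(n_ostium, float); n2 /= np.linalg.norm(n2)
    between_normals = np.degrees(np.arccos(np.clip(abs(n1 @ n2), -1.0, 1.0)))
    return 90.0 - between_normals

# Parallel normals -> the septum normal is perpendicular to the ostium plane.
assert np.isclose(angle_between_normal_and_plane([0, 0, 1], [0, 0, 1]), 90.0)
# Orthogonal normals -> the septum normal lies in the ostium plane.
assert np.isclose(angle_between_normal_and_plane([1, 0, 0], [0, 0, 1]), 0.0)
```

Clustering such angles across a patient cohort (e.g., with k-means) would then reveal the inherent groups the study reports.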
Source: http://dx.doi.org/10.22489/cinc.2019.439
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7338039

Efficient Segmentation Pipeline Using Diffeomorphic Image Registration: A Validation Study.

Comput Cardiol (2010) 2019 Sep 24;46. Epub 2020 Feb 24.

Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah, USA.

Functional measurements of the left atrium (LA) in atrial fibrillation (AF) patients are limited to a single CINE slice midway through the LA. Nonetheless, a full 3D characterization of atrial functional measurements would provide more insight into LA function. But this improved modeling capacity comes at the price of requiring LA segmentation of each 3D time point, a time-consuming and expensive task that requires anatomy-specific expertise. We propose an efficient pipeline that requires ground truth segmentation of only a single (or limited) CINE time point to accurately propagate it throughout the sequence. This method significantly saves human effort and enables better characterization of LA anatomy. From a gated cardiac CINE MRI sequence, we select a single CINE time point with ground truth segmentation and, assuming cyclic motion, register the images corresponding to all other time points using diffeomorphic registration in ANTs. The diffeomorphic registration fields allow us to map a given anatomical shape (segmentation) to each CINE time point, facilitating the construction of a 4D shape model.
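Once a registration field is available, propagating the single ground-truth segmentation reduces to warping a label image with nearest-neighbor interpolation so labels stay integral. The sketch below uses scipy rather than ANTs, purely to illustrate this one step of the pipeline:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def propagate_segmentation(seg, displacement):
    """Warp a label image with a dense displacement field, sampling with
    nearest-neighbor interpolation (order=0) so label values are preserved."""
    grid = np.indices(seg.shape).astype(float)   # identity sampling grid
    coords = grid + displacement                 # where to sample the source
    return map_coordinates(seg, coords, order=0, mode='nearest')

seg = np.zeros((8, 8), dtype=int)
seg[2:5, 2:5] = 1                                # a small square label
shift = np.ones((2, 8, 8))                       # sample one voxel down-right
warped = propagate_segmentation(seg, shift)
assert warped[1, 1] == 1                         # label moved up-left by one voxel
```

In the actual pipeline, the displacement would come from the ANTs diffeomorphic registration between the segmented time point and each other CINE time point.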
Source: http://dx.doi.org/10.22489/cinc.2019.364
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7338038

Quantifying the Severity of Metopic Craniosynostosis: A Pilot Study Application of Machine Learning in Craniofacial Surgery.

J Craniofac Surg 2020 May/Jun;31(3):697-701

Department of Plastic Surgery, UPMC Children's Hospital, University of Pittsburgh Medical Center, Pittsburgh, PA.

The standard for diagnosing metopic craniosynostosis (CS) utilizes computed tomography (CT) imaging and physical exam, but there is no standardized method for determining disease severity. Previous studies using interfrontal angles have evaluated differences in specific skull landmarks; however, these measurements are difficult to readily ascertain in clinical practice and fail to assess the complete skull contour. This pilot project employs machine learning algorithms to combine statistical shape information with expert ratings to generate a novel objective method of measuring the severity of metopic CS.

Expert ratings of normal and metopic skull CT images were collected. Skull-shape analysis was conducted using ShapeWorks software. Machine learning was used to combine the expert ratings with our shape analysis model to predict the severity of metopic CS from CT images. Our model was then compared to the gold standard using interfrontal angles.

Seventeen metopic skull CT images of patients 5 to 15 months old were assigned a severity by 18 craniofacial surgeons, and 65 nonaffected controls were included with a severity of 0. Our model accurately correlated the level of skull deformity with severity (P < 0.10) and predicted the severity of metopic CS more often than models using interfrontal angles (χ² = 5.46, P = 0.019).

This is the first study that combines shape information with expert ratings to generate an objective measure of severity for metopic CS. This method may help clinicians easily quantify severity and perform robust longitudinal assessments of the condition.
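The pipeline described (shape statistics combined with expert ratings to predict severity) can be caricatured as PCA on shape features followed by a linear regression onto the ratings. This is a synthetic sketch with hypothetical names, not the study's actual model or data:

```python
import numpy as np

def fit_severity_model(shape_features, expert_ratings, k=3):
    """Project shape features onto their top-k PCA modes, then fit a linear
    least-squares map from the PCA loadings to the expert severity ratings."""
    mean = shape_features.mean(axis=0)
    X = shape_features - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    loadings = X @ Vt[:k].T
    A = np.column_stack([loadings, np.ones(len(loadings))])  # add intercept
    w, *_ = np.linalg.lstsq(A, expert_ratings, rcond=None)

    def predict(feats):
        l = (feats - mean) @ Vt[:k].T
        return np.column_stack([l, np.ones(len(l))]) @ w
    return predict

rng = np.random.default_rng(2)
true_loadings = rng.normal(size=(30, 3))
modes = rng.normal(size=(3, 10))
feats = true_loadings @ modes                 # shapes varying along 3 modes
severity = true_loadings[:, 0] * 2.0 + 1.0    # synthetic "expert" scores
predict = fit_severity_model(feats, severity, k=3)
assert np.allclose(predict(feats), severity, atol=1e-6)
```

The real study rates CT-derived skull shapes, and any clinical model would of course be validated on held-out patients rather than the training set.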
Source: http://dx.doi.org/10.1097/SCS.0000000000006215
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7202995

DeepSSM: A Deep Learning Framework for Statistical Shape Modeling from Raw Images.

Shape Med Imaging (2018) 2018 Sep 23;11167:244-257. Epub 2018 Nov 23.

Scientific Computing and Imaging Institute, University of Utah.

Statistical shape modeling is an important tool for characterizing variation in anatomical morphology. Typical shapes of interest are measured using 3D imaging and a subsequent pipeline of registration, segmentation, and extraction of shape features or projections onto some lower-dimensional shape space, which facilitates subsequent statistical analysis. Many methods for constructing compact shape representations have been proposed, but they are often impractical due to the sequence of image preprocessing operations, which involve significant parameter tuning, manual delineation, and/or quality control by the users. We propose DeepSSM: a deep learning approach that extracts a low-dimensional shape representation directly from 3D images, requiring virtually no parameter tuning or user assistance. DeepSSM uses a convolutional neural network (CNN) that simultaneously localizes the biological structure of interest, establishes correspondences, and projects these points onto a low-dimensional shape representation in the form of PCA loadings within a point distribution model. To overcome the limited availability of training images with dense correspondences, we present a novel data augmentation procedure that uses existing correspondences on a relatively small set of processed images, together with their shape statistics, to create plausible training samples with known shape parameters. In this way, we leverage a limited number of CT/MRI scans (40-50) to generate the thousands of images needed to train a deep neural network. After training, the CNN automatically produces accurate low-dimensional shape representations for unseen images. We validate DeepSSM on three applications: pediatric cranial CT for characterization of metopic craniosynostosis, femur CT scans identifying morphologic deformities of the hip due to femoroacetabular impingement, and left atrium MRI scans for atrial fibrillation recurrence prediction.
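The data augmentation idea (sampling the PCA shape space of a small training set to synthesize new samples with known shape parameters) can be sketched as follows; function and variable names are hypothetical, and the paper's procedure also synthesizes the corresponding images, which is omitted here:

```python
import numpy as np

def augment_shapes(correspondences, n_new, scale=1.0, seed=0):
    """Sample new PCA loadings from the training distribution and reconstruct
    plausible correspondence point sets with known shape parameters."""
    rng = np.random.default_rng(seed)
    X = correspondences.reshape(len(correspondences), -1)
    mean = X.mean(axis=0)
    _, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    std = S / np.sqrt(len(X) - 1)                 # per-mode standard deviation
    new_loadings = rng.normal(0, scale * std, size=(n_new, len(std)))
    new_shapes = mean + new_loadings @ Vt
    return new_shapes.reshape((n_new,) + correspondences.shape[1:]), new_loadings

rng = np.random.default_rng(3)
shapes = rng.normal(size=(40, 64, 3))             # 40 shapes, 64 points each
aug, loadings = augment_shapes(shapes, n_new=1000)
assert aug.shape == (1000, 64, 3)
```

Because each synthesized shape comes with its sampled loadings, the augmented pairs provide exactly the (image, shape parameter) supervision the CNN needs.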
Source: http://dx.doi.org/10.1007/978-3-030-04747-4_23
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6385885

Does Alignment in Statistical Shape Modeling of Left Atrium Appendage Impact Stroke Prediction?

Comput Cardiol (2010) 2019 Sep 24;46. Epub 2020 Feb 24.

Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah, USA.

Evidence suggests that the shape of the left atrium appendage (LAA) is a primary indicator for predicting stroke in patients diagnosed with atrial fibrillation (AF). Statistical shape modeling tools that represent (i.e., parameterize) the underlying LAA variability are of crucial importance for learning shape-based predictors of stroke. Most shape modeling techniques use some form of alignment, either as a data pre-processing step or during the modeling step. However, the LAA is a joint anatomy with the left atrium (LA), and their relative position and alignment play a crucial part in determining the risk of stroke. In this paper, we explore different alignment strategies for statistical shape modeling and how each strategy affects stroke prediction capability, allowing us to identify a unified approach to alignment when analyzing the LAA anatomy for stroke. We study three alignment strategies: (i) global alignment, (ii) global translational alignment, and (iii) cluster-based alignment. Our results show that alignment strategies that take into account LAA orientation, i.e., (ii), or the inherent natural clustering of the population under study, i.e., (iii), provide significant improvement over global alignment in both qualitative and quantitative measures.
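Global alignment of point-set shapes is typically solved with Procrustes analysis; a minimal rigid-alignment sketch via the Kabsch/SVD solution (a standard technique, not necessarily the exact alignment code used in the paper):

```python
import numpy as np

def procrustes_align(source, target):
    """Rigidly align source points to target (rotation + translation) using
    the Kabsch/SVD solution; a building block for global shape alignment."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    D = np.diag([1.0] * (len(H) - 1) + [d])
    R = Vt.T @ D @ U.T                             # optimal rotation
    return (source - mu_s) @ R.T + mu_t

rng = np.random.default_rng(4)
pts = rng.normal(size=(10, 3))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
moved = pts @ R_true.T + np.array([1.0, -2.0, 0.5])
assert np.allclose(procrustes_align(pts, moved), moved, atol=1e-8)
```

Translational-only alignment, strategy (ii) above, would keep the rotation fixed at identity and subtract only the centroid difference, preserving each LAA's orientation relative to the LA.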
Source: http://dx.doi.org/10.22489/cinc.2019.200
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7338006