Publications by authors named "Jeffrey A Fessler"

141 Publications

Joint Design of RF and gradient waveforms via auto-differentiation for 3D tailored excitation in MRI.

IEEE Trans Med Imaging 2021 May 24;PP. Epub 2021 May 24.

This paper proposes a new method for joint design of radiofrequency (RF) and gradient waveforms in Magnetic Resonance Imaging (MRI), and applies it to the design of 3D spatially tailored saturation and inversion pulses. The joint design of both waveforms is characterized by the ODE Bloch equations, to which there is no known direct solution. Existing approaches therefore typically rely on simplified problem formulations based on, e.g., the small-tip approximation or constraining the gradient waveforms to particular shapes, and often apply only to specific objective functions for a narrow set of design goals (e.g., ignoring hardware constraints). This paper develops and exploits an auto-differentiable Bloch simulator to directly compute Jacobians of the (Bloch-simulated) excitation pattern with respect to RF and gradient waveforms. This approach is compatible with arbitrary sub-differentiable loss functions, and optimizes the RF and gradients directly without restricting the waveform shapes. For computational efficiency, we derive and implement explicit Bloch simulator Jacobians (approximately halving computation time and memory usage). To enforce hardware limits (peak RF, gradient, and slew rate), we use a change of variables that makes the 3D pulse design problem effectively unconstrained; we then optimize the resulting problem directly using the proposed auto-differentiation framework. We demonstrate our approach with two kinds of 3D excitation pulses that cannot be easily designed with conventional approaches: Outer-volume saturation (90° flip angle), and inner-volume inversion.
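The change-of-variables trick described above can be sketched in a few lines. Here a toy scalar "total flip" objective stands in for the Bloch-simulated excitation loss, and the peak amplitude `b_max`, the target value, and the step size are all invented for illustration; the point is only that `b = b_max * tanh(u)` satisfies the peak constraint for every unconstrained `u`, so plain gradient descent (or any auto-diff optimizer) applies:

```python
import numpy as np

b_max = 0.2  # invented peak-RF limit (arbitrary units)

def rf_from_unconstrained(u):
    # |b| <= b_max holds for every u by construction
    return b_max * np.tanh(u)

def grad_loss(u, target):
    # toy loss: (sum of RF samples - target)^2, a stand-in for a Bloch-simulated loss
    # chain rule through b = b_max * tanh(u): db/du = b_max * (1 - tanh(u)^2)
    flip = np.sum(rf_from_unconstrained(u))
    return 2.0 * (flip - target) * b_max * (1.0 - np.tanh(u) ** 2)

u = np.zeros(64)
for _ in range(300):          # unconstrained gradient descent on u
    u -= 0.1 * grad_loss(u, target=4.0)

b = rf_from_unconstrained(u)
```

The optimizer never has to project onto the feasible set; the constraint is baked into the parameterization, which is what makes the 3D design problem "effectively unconstrained."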
Source: http://dx.doi.org/10.1109/TMI.2021.3083104
May 2021

Neural network based 3D tracking with a graphene transparent focal stack imaging system.

Nat Commun 2021 04 23;12(1):2413. Epub 2021 Apr 23.

Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA.

Recent years have seen the rapid growth of new approaches to optical imaging, with an emphasis on extracting three-dimensional (3D) information from what is normally a two-dimensional (2D) image capture. Perhaps most importantly, the rise of computational imaging enables both new physical layouts of optical components and new algorithms to be implemented. This paper concerns the convergence of two advances: the development of a transparent focal stack imaging system using graphene photodetector arrays, and the rapid expansion of the capabilities of machine learning including the development of powerful neural networks. This paper demonstrates 3D tracking of point-like objects with multilayer feedforward neural networks and the extension to tracking positions of multi-point objects. Computer simulations further demonstrate how this optical system can track extended objects in 3D, highlighting the promise of combining nanophotonic devices, new optical system designs, and machine learning for new frontiers in 3D imaging.
Source: http://dx.doi.org/10.1038/s41467-021-22696-x
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8065157
April 2021

Deep Convolutional Neural Network with Adversarial Training for Denoising Digital Breast Tomosynthesis Images.

IEEE Trans Med Imaging 2021 Mar 17;PP. Epub 2021 Mar 17.

Digital breast tomosynthesis (DBT) is a quasi-three-dimensional imaging modality that can reduce false negatives and false positives in mass lesion detection caused by overlapping breast tissue in conventional two-dimensional (2D) mammography. The patient dose of a DBT scan is similar to that of a single 2D mammogram, while acquisition of each projection view adds detector readout noise. The noise is propagated to the reconstructed DBT volume, possibly obscuring subtle signs of breast cancer such as microcalcifications (MCs). This study developed a deep convolutional neural network (DCNN) framework for denoising DBT images with a focus on improving the conspicuity of MCs as well as preserving the ill-defined margins of spiculated masses and normal tissue textures. We trained the DCNN using a weighted combination of mean squared error (MSE) loss and adversarial loss. We configured a dedicated x-ray imaging simulator in combination with digital breast phantoms to generate realistic in silico DBT data for training. We compared the DCNN training between using digital phantoms and using real physical phantoms. The proposed denoising method improved the contrast-to-noise ratio (CNR) and detectability index (d') of the simulated MCs in the validation phantom DBTs. These performance measures improved with increasing training target dose and training sample size. Promising denoising results were observed on the transferability of the digital-phantom-trained denoiser to DBT reconstructed with different techniques and on a small independent test set of human subject DBT images.
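The training objective described above is a weighted combination of a fidelity term and an adversarial term. A minimal numeric sketch, with a constant standing in for the discriminator output and an invented weight `w` (none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(9)
denoised = rng.random((4, 4))     # toy network output
target = rng.random((4, 4))       # toy high-dose target

# discriminator score on the denoised image, in (0, 1); a constant stands in
# for a real adversarial network here
d_out = 0.8

mse = np.mean((denoised - target) ** 2)   # fidelity term
adv = -np.log(d_out)                      # non-saturating generator loss
w = 0.01                                  # invented adversarial weight
total = mse + w * adv
```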
Source: http://dx.doi.org/10.1109/TMI.2021.3066896
March 2021

High-Resolution Oscillating Steady-State fMRI Using Patch-Tensor Low-Rank Reconstruction.

IEEE Trans Med Imaging 2020 12 30;39(12):4357-4368. Epub 2020 Nov 30.

The goals of fMRI acquisition include high spatial and temporal resolutions with a high signal-to-noise ratio (SNR). Oscillating Steady-State Imaging (OSSI) is a new fMRI acquisition method that provides large oscillating signals with the potential for high SNR, but does so at the expense of spatial and temporal resolutions. The unique oscillation pattern of OSSI images makes it well suited for high-dimensional modeling. We propose a patch-tensor low-rank model to exploit the local spatiotemporal low-rankness of OSSI images. We also develop a practical sparse sampling scheme with improved sampling incoherence for OSSI. With an alternating direction method of multipliers (ADMM) based algorithm, we improve OSSI spatial and temporal resolutions, achieving a factor-of-12 acquisition acceleration and 1.3 mm isotropic spatial resolution in prospectively undersampled experiments. The proposed model yields high temporal SNR with more activation than other low-rank methods. Compared to standard gradient-echo (GRE) imaging with the same spatiotemporal resolution, 3D OSSI tensor-model reconstruction demonstrates 2 times higher temporal SNR with 2 times more functional activation.
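The local low-rank idea can be illustrated on a single matricized patch; the patch size, the rank-1 synthetic signal, and the threshold below are invented, and the paper's full patch-tensor ADMM is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "patch": a rank-1 spatiotemporal signal (5x5 patch x 10 frames) + noise
space = rng.standard_normal((25, 1))
time = rng.standard_normal((1, 10))
patch = space @ time + 0.1 * rng.standard_normal((25, 10))

def svt(M, tau):
    """Singular-value soft thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

denoised = svt(patch, tau=1.0)
rank = np.linalg.matrix_rank(denoised, tol=1e-8)
```

Because the noise singular values fall below the threshold while the signal singular value does not, the thresholded patch is (near) rank-1, which is the local structure the reconstruction exploits.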
Source: http://dx.doi.org/10.1109/TMI.2020.3017450
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7751316
December 2020

Improved Low-Count Quantitative PET Reconstruction With an Iterative Neural Network.

IEEE Trans Med Imaging 2020 11 28;39(11):3512-3522. Epub 2020 Oct 28.

Image reconstruction in low-count PET is particularly challenging because gammas from natural radioactivity in Lu-based crystals cause high random fractions that lower the measurement signal-to-noise-ratio (SNR). In model-based image reconstruction (MBIR), using more iterations of an unregularized method may increase the noise, so incorporating regularization into the image reconstruction is desirable to control the noise. New regularization methods based on learned convolutional operators are emerging in MBIR. We modify the architecture of an iterative neural network, BCD-Net, for PET MBIR, and demonstrate the efficacy of the trained BCD-Net using XCAT phantom data that simulates the low true coincidence count-rates with high random fractions typical for Y-90 PET patient imaging after Y-90 microsphere radioembolization. Numerical results show that the proposed BCD-Net significantly improves CNR and RMSE of the reconstructed images compared to MBIR methods using non-trained regularizers, total variation (TV) and non-local means (NLM). Moreover, BCD-Net successfully generalizes to test data that differs from the training data. Improvements were also demonstrated for the clinically relevant phantom measurement data where we used training and testing datasets having very different activity distributions and count-levels.
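A drastically simplified view of the BCD-Net-style alternation: a moving-average filter stands in for the trained denoising module, and a toy downsampling matrix stands in for the PET system model (both are assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x_true = np.convolve(rng.random(n), np.ones(5) / 5, mode="same")  # smooth signal
A = np.eye(n)[::2]                  # toy "system": observe every other sample
y = A @ x_true + 0.05 * rng.standard_normal(A.shape[0])

beta = 0.5
x = np.zeros(n)
H = A.T @ A + beta * np.eye(n)      # Hessian of the data-fit subproblem
for _ in range(20):
    # "denoiser" step: a 3-tap moving average stands in for the trained network
    z = np.convolve(x, np.ones(3) / 3, mode="same")
    # data-fit step: x = argmin ||A x - y||^2 + beta ||x - z||^2 (closed form here)
    x = np.linalg.solve(H, A.T @ y + beta * z)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

In BCD-Net the denoiser is a trained convolutional module and the data-fit step uses the Poisson likelihood; the alternation structure is the same.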
Source: http://dx.doi.org/10.1109/TMI.2020.2998480
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7685233
November 2020

A deep neural network for fast and accurate scatter estimation in quantitative SPECT/CT under challenging scatter conditions.

Eur J Nucl Med Mol Imaging 2020 12 15;47(13):2956-2967. Epub 2020 May 15.

Department of Radiology, University of Michigan, 1301 Catherine, 2276 Medical Science I/5610, Ann Arbor, MI, 48109, USA.

Purpose: A major challenge for accurate quantitative SPECT imaging of some radionuclides is the inadequacy of simple energy-window-based scatter estimation methods, widely available on clinical systems. A deep learning approach for SPECT/CT scatter estimation is investigated as an alternative to computationally expensive Monte Carlo (MC) methods for challenging SPECT radionuclides, such as Y-90.

Methods: A deep convolutional neural network (DCNN) was trained to separately estimate each scatter projection from the measured Y-90 bremsstrahlung SPECT emission projection and CT attenuation projection that form the network inputs. The 13-layer deep architecture consisted of separate paths for the emission and attenuation projections that are concatenated before the final convolution steps. The training label consisted of MC-generated "true" scatter projections in phantoms (MC is needed only for training), with the mean square difference relative to the model output serving as the loss function. The test data set included a simulated sphere phantom with a lung insert, measurements of a liver phantom, and patients after Y-90 radioembolization. OS-EM SPECT reconstruction without scatter correction (NO-SC), with the true scatter (TRUE-SC) (available for simulated data only), with the DCNN estimated scatter (DCNN-SC), and with a previously developed MC scatter model (MC-SC) were compared, including with Y-90 PET when available.

Results: The contrast recovery (CR) vs. noise and lung insert residual error vs. noise curves for images reconstructed with DCNN-SC and MC-SC estimates were similar. At the same noise level of 10% (across multiple realizations), the average sphere CR was 24%, 52%, 55%, and 67% for NO-SC, MC-SC, DCNN-SC, and TRUE-SC, respectively. For the liver phantom, the average CR for liver inserts was 32%, 73%, and 65% for NO-SC, MC-SC, and DCNN-SC, respectively, while the corresponding values for average contrast-to-noise ratio (visibility index) in low-concentration extra-hepatic inserts were 2, 19, and 61, respectively. In patients, there was high concordance between lesion-to-liver uptake ratios for SPECT reconstruction with DCNN-SC (median 4.8, range 0.02-13.8) compared with MC-SC (median 4.0, range 0.13-12.1; CCC = 0.98) and with Y-90 PET (median 4.9, range 0.02-11.2; CCC = 0.96), while the concordance with NO-SC was poor (median 2.8, range 0.3-7.2; CCC = 0.59). The trained DCNN took ~40 s (using a single i5 processor on a desktop computer) to generate the scatter estimates for all 128 views in a patient scan, compared to ~80 min for the MC scatter model using 12 processors.
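For readers unfamiliar with the metrics, a toy computation of contrast recovery and contrast-to-noise ratio on synthetic ROI samples (the ROI definitions, true contrast, and noise level are invented, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(6)
background = 1.0 + 0.05 * rng.standard_normal(1000)  # background ROI samples
sphere = 1.5 + 0.05 * rng.standard_normal(100)       # sphere ROI; true contrast = 0.5

measured_contrast = sphere.mean() - background.mean()
cr = 100.0 * measured_contrast / 0.5        # contrast recovery, percent of truth
cnr = measured_contrast / background.std()  # contrast-to-noise ratio
```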

Conclusions: For diverse Y-90 test data that included patient studies, we demonstrated comparable performance between images reconstructed with deep learning and MC-based scatter estimates using metrics relevant for dosimetry and for safety. This approach, which can be generalized to other radionuclides by changing the training data, is well suited for real-time clinical use because of its high speed (orders of magnitude faster than MC) while maintaining high accuracy.
Source: http://dx.doi.org/10.1007/s00259-020-04840-9
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7666660
December 2020

Optimization Methods for Magnetic Resonance Image Reconstruction: Key Models and Optimization Algorithms.

IEEE Signal Process Mag 2020 Jan 17;37(1):33-40. Epub 2020 Jan 17.

EECS Department, Univ. of Michigan.

The development of compressed sensing methods for magnetic resonance (MR) image reconstruction led to an explosion of research on models and optimization algorithms for MR imaging (MRI). Roughly 10 years after such methods first appeared in the MRI literature, the U.S. Food and Drug Administration (FDA) approved certain compressed sensing methods for commercial use, making compressed sensing a clinical success story for MRI. This review paper summarizes several key models and optimization algorithms for MR image reconstruction, including both the type of methods that have FDA approval for clinical use, as well as more recent methods being considered in the research community that use data-adaptive regularizers. Many algorithms have been devised that exploit the structure of the system model and regularizers used in MRI; this paper strives to collect such algorithms in a single survey.
Source: http://dx.doi.org/10.1109/MSP.2019.2943645
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7172344
January 2020

Convolutional Analysis Operator Learning: Dependence on Training Data.

IEEE Signal Process Lett 2019 Aug 7;26(8):1137-1141. Epub 2019 Jun 7.

Department of Electrical Engineering and Computer Science, The University of Michigan, Ann Arbor, MI 48109 USA.

Convolutional analysis operator learning (CAOL) enables the unsupervised training of (hierarchical) convolutional sparsifying operators or autoencoders from large datasets. One can use many training images for CAOL, but a precise understanding of the impact of doing so has remained an open question. This paper presents a series of results that lend insight into the impact of dataset size on the filter update in CAOL. The first result is a general deterministic bound on errors in the estimated filters, and is followed by a bound on the errors as the number of training samples increases. The second result provides a probabilistic analogue of this bound. The bounds depend on properties of the training data, and we investigate their empirical values with real data. Taken together, these results provide evidence for the potential benefit of using more training data in CAOL.
Source: http://dx.doi.org/10.1109/lsp.2019.2921446
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7170269
August 2019

Myelin water fraction estimation using small-tip fast recovery MRI.

Magn Reson Med 2020 10 12;84(4):1977-1990. Epub 2020 Apr 12.

Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA.

Purpose: To demonstrate the feasibility of an optimized set of small-tip fast recovery (STFR) MRI scans for rapidly estimating myelin water fraction (MWF) in the brain.

Methods: We optimized a set of STFR scans to minimize the Cramér-Rao Lower Bound of MWF estimates. We evaluated the RMSE of MWF estimates from the optimized scans in simulation. We compared STFR-based MWF estimates (both modeling exchange and not modeling exchange) to multi-echo spin echo (MESE)-based estimates. We used the optimized scans to acquire in vivo data from which a MWF map was estimated. We computed the STFR-based MWF estimates using PERK, a recently developed kernel regression technique, and the MESE-based MWF estimates using both regularized non-negative least squares (NNLS) and PERK.
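The CRLB-based scan selection can be illustrated on a one-parameter toy model; the exponential-decay signal, the T2 value, the noise level, and the candidate scan times below are assumptions for illustration, not the STFR signal model:

```python
import numpy as np

def crlb_T2(times, T2=0.08, sigma=0.01):
    """CRLB on T2 for the toy model s(t) = exp(-t / T2) with white noise sigma."""
    times = np.asarray(times, dtype=float)
    ds = (times / T2**2) * np.exp(-times / T2)   # sensitivity ds/dT2 at each scan
    fisher = np.sum(ds**2) / sigma**2            # Fisher information
    return 1.0 / fisher                          # CRLB = inverse Fisher information

bad_times = [0.001, 0.002, 0.003]   # all much shorter than T2: low sensitivity
good_times = [0.04, 0.08, 0.16]     # spread around T2: high sensitivity
```

Minimizing the CRLB over the scan parameters, as the Methods describe for the STFR scan set, amounts to choosing the design whose Fisher information about the parameters of interest is largest.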

Results: In simulation, the optimized STFR scans led to estimates of MWF with low RMSE across a range of tissue parameters and across white matter and gray matter. The STFR-based MWF estimates that modeled exchange compared well to MESE-based MWF estimates in simulation. When the optimized scans were tested in vivo, the MWF map that was estimated using a 3-compartment model with exchange was closer to the MESE-based MWF map.

Conclusions: The optimized STFR scans appear to be well suited for estimating MWF in simulation and in vivo when we model exchange in training. In this case, the STFR-based MWF estimates are close to the MESE-based estimates.
Source: http://dx.doi.org/10.1002/mrm.28259
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7478173
October 2020

Online Adaptive Image Reconstruction (OnAIR) Using Dictionary Models.

IEEE Trans Comput Imaging 2020;6:153-166

Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109 USA.

Sparsity and low-rank models have been popular for reconstructing images and videos from limited or corrupted measurements. Dictionary or transform learning methods are useful in applications such as denoising, inpainting, and medical image reconstruction. This paper proposes a framework for online (or time-sequential) adaptive reconstruction of dynamic image sequences from linear (typically undersampled) measurements. We model the spatiotemporal patches of the underlying dynamic image sequence as sparse in a dictionary, and we simultaneously estimate the dictionary and the images sequentially from streaming measurements. Multiple constraints on the adapted dictionary are also considered such as a unitary matrix, or low-rank dictionary atoms that provide additional efficiency or robustness. The proposed online algorithms are memory efficient and involve simple updates of the dictionary atoms, sparse coefficients, and images. Numerical experiments demonstrate the usefulness of the proposed methods in inverse problems such as video reconstruction or inpainting from noisy, subsampled pixels, and dynamic magnetic resonance image reconstruction from very limited measurements.
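One ingredient mentioned above, the unitary dictionary constraint, makes both sub-steps cheap: sparse coding reduces to transform-and-threshold, and the dictionary update is an orthogonal Procrustes problem. A sketch with invented sizes and synthetic patches (not the paper's streaming algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 16
D = np.linalg.qr(rng.standard_normal((d, d)))[0]   # unitary dictionary (invented)
# synthetic patches that are exactly sparse in D: ~20% nonzero coefficients
coeff_true = rng.standard_normal((d, 200)) * (rng.random((d, 200)) < 0.2)
patches = D @ coeff_true

# sparse coding under a unitary D is closed form: transform, then threshold
coef = D.T @ patches
z = coef * (np.abs(coef) > 0.1)

# dictionary update is an orthogonal Procrustes problem: D_new = U V^T
U, _, Vt = np.linalg.svd(patches @ z.T)
D_new = U @ Vt
```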
Source: http://dx.doi.org/10.1109/tci.2019.2931092
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7039536
January 2020

Image Reconstruction: From Sparsity to Data-adaptive Methods and Machine Learning.

Proc IEEE Inst Electr Electron Eng 2020 Jan 19;108(1):86-109. Epub 2019 Sep 19.

Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109 USA.

The field of medical image reconstruction has seen roughly four types of methods. The first type tended to be analytical methods, such as filtered back-projection (FBP) for X-ray computed tomography (CT) and the inverse Fourier transform for magnetic resonance imaging (MRI), based on simple mathematical models for the imaging systems. These methods are typically fast, but have suboptimal properties such as poor resolution-noise trade-off for CT. A second type is iterative reconstruction methods based on more complete models for the imaging system physics and, where appropriate, models for the sensor statistics. These iterative methods improved image quality by reducing noise and artifacts. The FDA-approved methods among these have been based on relatively simple regularization models. A third type of methods has been designed to accommodate modified data acquisition methods, such as reduced sampling in MRI and CT to reduce scan time or radiation dose. These methods typically involve mathematical image models involving assumptions such as sparsity or low rank. A fourth type of methods replaces mathematically designed models of signals and systems with data-driven or adaptive models inspired by the field of machine learning. This paper focuses on the two most recent trends in medical image reconstruction: methods based on sparsity or low-rank models, and data-driven methods based on machine learning techniques.
Source: http://dx.doi.org/10.1109/JPROC.2019.2936204
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7039447
January 2020

Efficient Regularized Field Map Estimation in 3D MRI.

IEEE Trans Comput Imaging 2020 15;6:1451-1458. Epub 2020 Oct 15.

Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109 USA.

Magnetic field inhomogeneity estimation is important in some types of magnetic resonance imaging (MRI), including field-corrected reconstruction for fast MRI with long readout times, and chemical-shift-based water-fat imaging. Regularized field map estimation methods that account for phase wrapping and noise involve nonconvex cost functions that require iterative algorithms. Most existing minimization techniques are computationally or memory intensive for 3D datasets, and are designed for single-coil MRI. This paper considers 3D MRI, optionally accounting for coil sensitivity, and addresses both multi-echo field map estimation and water-fat imaging. Our efficient algorithm uses a preconditioned nonlinear conjugate gradient method based on an incomplete Cholesky factorization of the Hessian of the cost function, along with a monotonic line search. Numerical experiments show the computational advantage of the proposed algorithm over state-of-the-art methods with similar memory requirements.
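The paper's incomplete-Cholesky preconditioner and nonlinear cost are not reproduced here; this sketch shows only the preconditioned CG skeleton, with a simple Jacobi (diagonal) preconditioner standing in, on a toy SPD linear system:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 40
B = rng.standard_normal((n, n))
H = B @ B.T + np.diag(10.0 * rng.random(n) + 1.0)   # toy SPD "Hessian"
b = rng.standard_normal(n)

Minv = 1.0 / np.diag(H)          # Jacobi preconditioner (stand-in for ichol)
x = np.zeros(n)
r = b.copy()                     # residual b - H x
z = Minv * r
p = z.copy()
for _ in range(200):
    Hp = H @ p
    alpha = (r @ z) / (p @ Hp)
    x += alpha * p
    r_new = r - alpha * Hp
    z_new = Minv * r_new
    beta = (r_new @ z_new) / (r @ z)
    p = z_new + beta * p
    r, z = r_new, z_new
    if np.linalg.norm(r) < 1e-10:
        break
```

A better preconditioner (such as the incomplete Cholesky factor the paper uses) changes only the `Minv` application, not the iteration structure.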
Source: http://dx.doi.org/10.1109/TCI.2020.3031082
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7943027
October 2020

Optimizing MRF-ASL scan design for precise quantification of brain hemodynamics using neural network regression.

Magn Reson Med 2020 06 21;83(6):1979-1991. Epub 2019 Nov 21.

Functional MRI Laboratory, University of Michigan, Ann Arbor, Michigan, USA.

Purpose: Arterial Spin Labeling (ASL) is a quantitative, non-invasive perfusion imaging method that does not use contrast agents. The magnetic resonance fingerprinting (MRF) framework can be adapted to ASL to estimate multiple physiological parameters simultaneously. In this work, we introduce an optimization scheme to increase the sensitivity of the ASL fingerprint. We also propose a regression-based estimation framework for MRF-ASL.

Methods: To improve the sensitivity of MRF-ASL signals to underlying parameters, we optimized ASL labeling durations using the Cramér-Rao Lower Bound (CRLB). This paper also proposes a neural network regression-based estimation framework trained using noisy synthetic signals generated from our ASL signal model. We tested our methods in silico and in vivo, and compared them with multiple post-labeling delay (multi-PLD) ASL and unoptimized MRF-ASL. We present comparisons of estimated maps for the six parameters of our signal model.

Results: The scan design process facilitated precise estimates of multiple hemodynamic parameters and tissue properties from a single scan, in regions of normal gray and white matter, as well as regions with anomalous perfusion activity in the brain. In particular, there was an 86.7% correlation of perfusion estimates with the ground truth in silico, using our proposed techniques. In vivo, there was roughly a 7-fold improvement in the Coefficient of Variation (CoV) for white matter perfusion, and a 2-fold improvement in gray matter perfusion CoV, in comparison to a reference multi-PLD method. The regression-based estimation approach provided perfusion estimates rapidly, with estimation times of around 1 s per map.

Conclusions: Scan design optimization, coupled with regression-based estimation is a powerful tool for improving precision in MRF-ASL.
Source: http://dx.doi.org/10.1002/mrm.28051
June 2020

Efficient Dynamic Parallel MRI Reconstruction for the Low-Rank Plus Sparse Model.

IEEE Trans Comput Imaging 2019 Mar 19;5(1):17-26. Epub 2018 Nov 19.

J. A. Fessler is with the Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109 USA

The low-rank plus sparse (L+S) decomposition model enables the reconstruction of under-sampled dynamic parallel magnetic resonance imaging (MRI) data. Solving for the low-rank and the sparse components involves non-smooth composite convex optimization, and algorithms for this problem can be categorized into proximal gradient methods and variable splitting methods. This paper investigates new efficient algorithms for both schemes. While current proximal gradient techniques for the L+S model involve the classical iterative soft thresholding algorithm (ISTA), this paper considers two accelerated alternatives, one based on the fast iterative shrinkage-thresholding algorithm (FISTA), and the other with the recent proximal optimized gradient method (POGM). In the augmented Lagrangian (AL) framework, we propose an efficient variable splitting scheme based on the form of the data acquisition operator, leading to simpler computation than the conjugate gradient (CG) approach required by existing AL methods. Numerical results suggest faster convergence of the efficient implementations for both frameworks, with POGM providing the fastest convergence overall and the practical benefit of being free of algorithm tuning parameters.
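The two proximal maps at the heart of the L+S model, singular-value thresholding for L and soft thresholding for S, in a toy ISTA-style loop with an identity "acquisition" operator. The accelerated FISTA/POGM variants studied in the paper change the stepping, not these proxes; all sizes, weights, and data below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
npix, nt = 30, 20
L_true = np.outer(rng.standard_normal(npix), np.ones(nt))   # static background
S_true = np.zeros((npix, nt))
S_true[5, 10] = 5.0                                         # sparse "dynamics"
Y = L_true + S_true + 0.01 * rng.standard_normal((npix, nt))

def svt(M, tau):
    """Prox of tau * nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = np.zeros_like(Y)
S = np.zeros_like(Y)
step = 0.5                        # = 1 / Lipschitz constant of the joint gradient
for _ in range(100):
    R = L + S - Y                 # gradient of 0.5 * ||L + S - Y||_F^2 in each block
    L = svt(L - step * R, step * 1.0)    # nuclear-norm prox, weight 1.0
    S = soft(S - step * R, step * 0.2)   # l1 prox, weight 0.2

err = np.linalg.norm(L + S - Y) / np.linalg.norm(Y)
```

Because representing the transient spike in S costs far less (l1 weight 0.2) than adding a singular value to L (nuclear weight 1.0), the decomposition assigns the background to L and the spike to S.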
Source: http://dx.doi.org/10.1109/TCI.2018.2882089
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6867710
March 2019

Algorithms and Analyses for Joint Spectral Image Reconstruction in Y-90 Bremsstrahlung SPECT.

IEEE Trans Med Imaging 2020 05 23;39(5):1369-1379. Epub 2019 Oct 23.

Quantitative yttrium-90 (Y-90) SPECT imaging is challenging due to the nature of Y-90, an almost pure beta emitter that is associated with a continuous spectrum of bremsstrahlung photons that have a relatively low yield. This paper proposes joint spectral reconstruction (JSR), a novel bremsstrahlung SPECT reconstruction method that uses multiple narrow acquisition windows with accurate multi-band forward modeling to cover a wide range of the energy spectrum. Theoretical analyses using Fisher information and Monte-Carlo (MC) simulation with a digital phantom show that the proposed JSR model with multiple acquisition windows has better performance in terms of covariance (precision) than previous methods using multi-band forward modeling with a single acquisition window, or using single-band forward modeling with a single acquisition window. We also propose an energy-window subset (ES) algorithm for JSR to achieve fast empirical convergence, and maximum-likelihood based initialization for all reconstruction methods to improve quantification accuracy in early iterations. For both MC simulation with a digital phantom and an experimental study with a physical multi-sphere phantom, our proposed JSR-ES, a fast algorithm for JSR with ES, yielded higher recovery coefficients (RCs) on hot spheres over all iterations and sphere sizes than all the other evaluated methods, due to fast empirical convergence. In the experimental study, for the smallest hot sphere (diameter 1.6 cm), at the 20th iteration the increase in RCs with JSR-ES was 66% and 31% compared with single wide and narrow band forward models, respectively. JSR-ES also yielded lower residual count error (RCE) on a cold sphere over all iterations than other methods for MC simulation with known scatter, but led to greater RCE compared with the single narrow band forward model at higher iterations in the experimental study when using estimated scatter.
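The energy-window-subset idea is in the spirit of ordered-subsets EM; below is a generic OS-EM toy in which subsets of random measurement rows stand in for energy windows (the system matrix, count levels, and subset count are all invented, and this is not the paper's multi-band forward model):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 40, 8
A = 10.0 * (rng.random((m, n)) + 0.1)      # invented system matrix
x_true = 2.0 * rng.random(n) + 0.1         # invented activity distribution
y = rng.poisson(A @ x_true).astype(float)  # Poisson measurements

x = np.ones(n)
subsets = np.array_split(np.arange(m), 3)  # 3 "energy-window" subsets
for _ in range(50):                        # 50 passes over all subsets
    for idx in subsets:                    # one EM-style update per subset
        As, ys = A[idx], y[idx]
        x *= (As.T @ (ys / np.maximum(As @ x, 1e-12))) / As.sum(axis=0)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Cycling updates over subsets gives the fast early convergence that the ES algorithm exploits, at the cost of the usual OS limit-cycle behavior at high iteration counts.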
Source: http://dx.doi.org/10.1109/TMI.2019.2949068
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7263381
May 2020

DECT-MULTRA: Dual-Energy CT Image Decomposition With Learned Mixed Material Models and Efficient Clustering.

IEEE Trans Med Imaging 2020 04 8;39(4):1223-1234. Epub 2019 Oct 8.

Dual-energy computed tomography (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Image-domain decomposition operates directly on CT images using linear matrix inversion, but the decomposed material images can be severely degraded by noise and artifacts. This paper proposes a new method dubbed DECT-MULTRA for image-domain DECT material decomposition that combines conventional penalized weighted-least squares (PWLS) estimation with regularization based on a mixed union of learned transforms (MULTRA) model. Our proposed approach pre-learns a union of common-material sparsifying transforms from patches extracted from all the basis materials, and a union of cross-material sparsifying transforms from multi-material patches. The common-material transforms capture the common properties among different material images, while the cross-material transforms capture the cross-dependencies. The proposed PWLS formulation is optimized efficiently by alternating between an image update step and a sparse coding and clustering step, with both of these steps having closed-form solutions. The effectiveness of our method is validated with both XCAT phantom and clinical head data. The results demonstrate that our proposed method provides superior material image quality and decomposition accuracy compared to other competing methods.
Source: http://dx.doi.org/10.1109/TMI.2019.2946177
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7263375
April 2020

Effect of source blur on digital breast tomosynthesis reconstruction.

Med Phys 2019 Dec 20;46(12):5572-5592. Epub 2019 Oct 20.

Department of Radiology, University of Michigan, Ann Arbor, MI, USA.

Purpose: Most digital breast tomosynthesis (DBT) reconstruction methods neglect the blurring of the projection views caused by the finite size or motion of the x-ray focal spot. This paper studies the effect of source blur on the spatial resolution of reconstructed DBT using analytical calculation and simulation, and compares the influence of source blur over a range of blurred source sizes.

Methods: Mathematically derived formulas describe the point spread function (PSF) of source blur on the detector plane as a function of the spatial locations of the finite-sized source and the object. By using the available technical parameters of some clinical DBT systems, we estimated the effective source sizes over a range of exposure time and DBT scan geometries. We used the CatSim simulation tool (GE Global Research, NY) to generate digital phantoms containing line pairs and beads at different locations and imaged with sources of four different sizes covering the range of potential source blur. By analyzing the relative contrasts of the test objects in the reconstructed images, we studied the effect of the source blur on the spatial resolution of DBT. Furthermore, we simulated a detector that rotated in synchrony with the source about the rotation center and calculated the spatial distribution of the blurring distance in the imaged volume to estimate its influence on source blur.
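The dependence of the source-blur width on the source and object positions follows from similar triangles; a back-of-envelope version with plausible but invented DBT-like numbers (not the derived shift-variant PSF itself):

```python
# A focal spot of width f projects an object point at height z_obj above the
# detector into a geometric blur of width f * z_obj / (z_src - z_obj) on the
# detector (similar triangles). All numbers below are illustrative.

def source_blur_width(focal_spot_mm, z_src_mm, z_obj_mm):
    return focal_spot_mm * z_obj_mm / (z_src_mm - z_obj_mm)

# stationary tube: ~0.3 mm focal spot, source 650 mm above the detector,
# object 50 mm above the detector
w_static = source_blur_width(0.3, 650.0, 50.0)
# moving tube: effective spot widened along the motion direction, e.g. 1.0 mm
w_moving = source_blur_width(1.0, 650.0, 50.0)
```

The blur grows with object height and with the effective spot size, which is why continuous-motion acquisition (a larger effective spot) degrades resolution while a typical stationary spot barely does.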

Results: Calculations demonstrate that the PSF is highly shift-variant, making it challenging to accurately implement during reconstruction. The results of the simulated phantoms demonstrated that a typical finite-sized focal spot (~0.3 mm) will not affect the reconstructed image resolution if the x-ray tube is stationary during data acquisition. If the x-ray tube moves during exposure, the extra blur due to the source motion may degrade image resolution, depending on the effective size of the source along the direction of the motion. A detector that rotates in synchrony with the source does not reduce the influence of source blur substantially.

Conclusions: This study demonstrates that the extra source blur due to the motion of the x-ray tube during image acquisition substantially degrades the reconstructed image resolution. This effect cannot be alleviated by rotating the detector in synchrony with the source. The simulation results suggest that there are potential benefits of modeling the source blur in image reconstruction for DBT systems using continuous-motion acquisition mode.
DOI: http://dx.doi.org/10.1002/mp.13801
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6899200
December 2019

Convolutional Analysis Operator Learning: Acceleration and Convergence.

IEEE Trans Image Process 2020 2;29(1):2108-2122. Epub 2019 Sep 2.

Convolutional operator learning is gaining attention in many signal processing and computer vision applications. Learning kernels has mostly relied on so-called patch-domain approaches that extract and store many overlapping patches across training signals. Due to memory demands, patch-domain methods have limitations when learning kernels from large datasets - particularly with multi-layered structures, e.g., convolutional neural networks - or when applying the learned kernels to high-dimensional signal recovery problems. The so-called convolution approach does not store many overlapping patches, and thus overcomes the memory problems particularly with careful algorithmic designs; it has been studied within the "synthesis" signal model, e.g., convolutional dictionary learning. This paper proposes a new convolutional analysis operator learning (CAOL) framework that learns an analysis sparsifying regularizer with the convolution perspective, and develops a new convergent Block Proximal Extrapolated Gradient method using a Majorizer (BPEG-M) to solve the corresponding block multi-nonconvex problems. To learn diverse filters within the CAOL framework, this paper introduces an orthogonality constraint that enforces a tight-frame filter condition, and a regularizer that promotes diversity between filters. Numerical experiments show that, with sharp majorizers, BPEG-M significantly accelerates the CAOL convergence rate compared to the state-of-the-art block proximal gradient (BPG) method. Numerical experiments for sparse-view computational tomography show that a convolutional sparsifying regularizer learned via CAOL significantly improves reconstruction quality compared to a conventional edge-preserving regularizer. Using more and wider kernels in a learned regularizer better preserves edges in reconstructed images.
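As a toy illustration of the analysis-model sparse-coding step that CAOL alternates with its filter update, the sketch below convolves a 1-D signal with each analysis filter and hard-thresholds the coefficients. The threshold value follows the standard closed-form ℓ0 hard-thresholding solution; the 1-D setting and function name are illustrative assumptions.

```python
import numpy as np

def caol_sparse_codes(signal, filters, alpha):
    """Sparse-coding step of convolutional analysis operator learning:
    convolve the signal with each analysis filter and hard-threshold
    the result. The threshold sqrt(2*alpha) is the closed-form solution
    of min_z 0.5*(c - z)**2 + alpha*(z != 0), applied elementwise."""
    thresh = np.sqrt(2.0 * alpha)
    codes = []
    for h in filters:
        c = np.convolve(signal, h, mode="same")
        codes.append(np.where(np.abs(c) >= thresh, c, 0.0))
    return codes
```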
DOI: http://dx.doi.org/10.1109/TIP.2019.2937734
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7170176
January 2020

SPULTRA: Low-Dose CT Image Reconstruction With Joint Statistical and Learned Image Models.

IEEE Trans Med Imaging 2020 03 12;39(3):729-741. Epub 2019 Aug 12.

Low-dose CT image reconstruction has been a popular research topic in recent years. A typical reconstruction method based on post-log measurements is penalized weighted least squares (PWLS). Due to the underlying limitations of the post-log statistical model, PWLS reconstruction quality is often degraded in low-dose scans. This paper investigates a shifted-Poisson (SP) model-based likelihood function that uses the pre-log raw measurements, which better represent the measurement statistics, together with a data-driven regularizer exploiting a Union of Learned TRAnsforms (SPULTRA). Both the SP-induced data-fidelity term and the regularizer in the proposed framework are nonconvex. The proposed SPULTRA algorithm uses quadratic surrogate functions for the SP-induced data-fidelity term. Each iteration involves a quadratic subproblem for updating the image, and a sparse coding and clustering subproblem that has a closed-form solution. The SPULTRA algorithm has a computational cost per iteration similar to that of its recent counterpart PWLS-ULTRA, which uses post-log measurements, and it provides better image reconstruction quality than PWLS-ULTRA, especially in low-dose scans.
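The shifted-Poisson likelihood at the heart of SPULTRA can be written down in a few lines. The sketch below gives the negative log-likelihood (constants dropped) for pre-log data with electronic-noise variance sigma2; it is the objective the algorithm majorizes with quadratic surrogates, not the full reconstruction pipeline.

```python
import numpy as np

def shifted_poisson_nll(y, ybar, sigma2):
    """Shifted-Poisson negative log-likelihood (constant terms dropped):
    (y + sigma2) is modeled as Poisson with mean (ybar + sigma2), where
    ybar is the expected pre-log measurement and sigma2 the electronic
    readout-noise variance."""
    mean = ybar + sigma2
    return float(np.sum(mean - (y + sigma2) * np.log(mean)))
```

Setting the derivative with respect to each mean to zero shows the NLL is minimized when ybar matches the data, as the test below checks.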
DOI: http://dx.doi.org/10.1109/TMI.2019.2934933
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7170173
March 2020

Simplified Statistical Image Reconstruction for X-ray CT With Beam-Hardening Artifact Compensation.

IEEE Trans Med Imaging 2020 01 10;39(1):111-118. Epub 2019 Jun 10.

CT images are often affected by beam-hardening artifacts due to the polychromatic nature of the X-ray spectra. These artifacts appear in the image as cupping in homogeneous areas and as dark bands between dense regions such as bones. This paper proposes a simplified statistical reconstruction method for X-ray CT based on Poisson statistics that accounts for the non-linearities caused by beam hardening. The main advantages of the proposed method over previous algorithms are that it avoids the preliminary segmentation step, which can be tricky, especially for low-dose scans, and it does not require knowledge of the whole source spectrum, which is often unknown. Each voxel attenuation is modeled as a mixture of bone and soft tissue by defining density-dependent tissue fractions and maintaining one unknown per voxel. We approximate the energy-dependent attenuation corresponding to different combinations of bone and soft tissues, the so-called beam-hardening function, with the 1D function corresponding to water plus two parameters that can be tuned empirically. Results on both simulated data with Poisson sinogram noise and two rodent studies acquired with the ARGUS/CT system showed a beam hardening reduction (both cupping and dark bands) similar to analytical reconstruction followed by post-processing techniques but with reduced noise and streaks in cases with a low number of projections, as expected for statistical image reconstruction.
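The 1D water beam-hardening function mentioned above (shown here without the two empirical tuning parameters) amounts to a polychromatic log-transform. The two-energy spectrum and attenuation values below are illustrative assumptions.

```python
import numpy as np

def water_bh_function(thickness_cm, spectrum, mu_water):
    """1D water beam-hardening function: the polychromatic log
    measurement p(t) = -ln( sum_E s_E * exp(-mu_E * t) ) for a water
    thickness t. Because low energies are absorbed preferentially,
    p(t) grows sublinearly with t, which produces cupping if
    uncorrected."""
    s = np.asarray(spectrum, dtype=float)
    s = s / s.sum()
    t = np.atleast_1d(np.asarray(thickness_cm, dtype=float))
    return -np.log(np.exp(-np.outer(t, mu_water)) @ s)
```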
DOI: http://dx.doi.org/10.1109/TMI.2019.2921929
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6995645
January 2020

A GRAPPA algorithm for arbitrary 2D/3D non-Cartesian sampling trajectories with rapid calibration.

Magn Reson Med 2019 09 3;82(3):1101-1112. Epub 2019 May 3.

Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan.

Purpose: GRAPPA is a popular reconstruction method for Cartesian parallel imaging, but is not easily extended to non-Cartesian sampling. We introduce a general and practical GRAPPA algorithm for arbitrary non-Cartesian imaging.

Methods: We formulate a general GRAPPA reconstruction by associating a unique kernel with each unsampled k-space location that has a distinct constellation, that is, local sampling pattern. We calibrate these generalized kernels by applying the Fourier-transform phase-shift property to fully gridded or separately acquired Cartesian autocalibration signal (ACS) data. To handle the resulting large number of different kernels, we introduce a fast calibration algorithm based on the nonuniform FFT (NUFFT) and circulant ACS boundary conditions. We applied our method to retrospectively under-sampled rotated stack-of-stars/spirals in vivo datasets, and to a prospectively under-sampled rotated stack-of-spirals functional MRI acquisition with a finger-tapping task.

Results: We reconstructed all datasets without performing any trajectory-specific manual adaptation of the method. For the retrospectively under-sampled experiments, our method achieved image quality (i.e., error and g-factor maps) comparable to conjugate gradient SENSE (cg-SENSE) and SPIRiT. Functional activation maps obtained from our method were in good agreement with those obtained using cg-SENSE, but required a shorter total reconstruction time (for the whole time-series): 3 minutes (proposed) vs 15 minutes (cg-SENSE).

Conclusions: This paper introduces a general 3D non-Cartesian GRAPPA method that is fast enough for practical use on today's computers. It is a direct generalization of the original GRAPPA to non-Cartesian scenarios. The method should be particularly useful in dynamic imaging, where a large number of frames are reconstructed from a single set of ACS data.
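The Fourier-transform phase-shift trick used for calibration can be sketched on a toy Cartesian ACS grid: multiplying the ACS image by a linear phase ramp evaluates its k-space on a shifted grid. Plain (uncentered) FFT conventions are an assumption here; fractional dkx/dky values would give the sub-grid shifts that non-Cartesian constellations require.

```python
import numpy as np

def shifted_acs_kspace(acs_image, dkx, dky=0.0):
    """Evaluate the ACS k-space on a grid shifted by (dkx, dky) sample
    spacings, via the Fourier shift theorem: a linear phase ramp in
    image space shifts the k-space sampling grid."""
    ny, nx = acs_image.shape
    ramp_x = np.exp(2j * np.pi * dkx * np.arange(nx) / nx)[None, :]
    ramp_y = np.exp(2j * np.pi * dky * np.arange(ny) / ny)[:, None]
    return np.fft.fft2(acs_image * ramp_x * ramp_y)
```

An integer shift must reproduce the original k-space grid, circularly shifted, which gives an exact sanity check.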
DOI: http://dx.doi.org/10.1002/mrm.27801
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6894481
September 2019

Asymptotic performance of PCA for high-dimensional heteroscedastic data.

J Multivar Anal 2018 Sep 19;167:435-452. Epub 2018 Jun 19.

Department of Electrical Engineering and Computer Science University of Michigan, Ann Arbor, MI 48109, USA.

Principal Component Analysis (PCA) is a classical method for reducing the dimensionality of data by projecting them onto a subspace that captures most of their variation. Effective use of PCA in modern applications requires understanding its performance for data that are both high-dimensional and heteroscedastic. This paper analyzes the statistical performance of PCA in this setting, i.e., for high-dimensional data drawn from a low-dimensional subspace and degraded by heteroscedastic noise. We provide simplified expressions for the asymptotic PCA recovery of the underlying subspace, subspace amplitudes and subspace coefficients; the expressions enable both easy and efficient calculation and reasoning about the performance of PCA. We exploit the structure of these expressions to show that, for a fixed average noise variance, the asymptotic recovery of PCA for heteroscedastic data is always worse than that for homoscedastic data (i.e., for noise variances that are equal across samples). Hence, while average noise variance is often a practically convenient measure for the overall quality of data, it gives an overly optimistic estimate of the performance of PCA for heteroscedastic data.
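The subspace-recovery quantity that the asymptotic expressions characterize can also be measured empirically. Below is a one-dimensional-subspace sketch; the metric (squared inner product between the true basis vector and the leading principal component) is one standard choice and an assumption here.

```python
import numpy as np

def pca_subspace_recovery(samples, u_true):
    """Squared inner product between a unit-norm true subspace basis
    vector and the leading principal component of the samples; equals
    1.0 for perfect recovery of a one-dimensional subspace."""
    cov = samples.T @ samples / samples.shape[0]
    _, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    u_hat = eigvecs[:, -1]
    return float((u_hat @ u_true) ** 2)
```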
DOI: http://dx.doi.org/10.1016/j.jmva.2018.06.002
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6377200
September 2018

Real-Time Filtering with Sparse Variations for Head Motion in Magnetic Resonance Imaging.

Signal Processing 2019 Apr 3;157:170-179. Epub 2018 Dec 3.

University of Michigan, Ann Arbor, MI, USA 48109.

Estimating a time-varying signal, such as head motion from magnetic resonance imaging data, becomes particularly challenging in the face of other temporal dynamics such as functional activation. This paper describes a new Kalman filter-like framework that includes a sparse residual term in the measurement model. This additional term allows the extended Kalman filter to generate real-time motion estimates suitable for prospective motion correction when such dynamics occur. An iterative augmented Lagrangian algorithm similar to the alterating direction method of multipliers implements the update step for this Kalman filter. This paper evaluates the accuracy and convergence rate of this iterative method for small and large motion in terms of its sensitivity to parameter selection. The included experiment on a simulated functional magnetic resonance imaging acquisition demonstrates that the resulting method improves the maximum Youden's J index of the time series analysis by 2-3% versus retrospective motion correction, while the sensitivity index increases from 4.3 to 5.4 when combining prospective and retrospective correction.
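A much-simplified stand-in for a measurement update with a sparse residual term: alternate a standard Kalman-gain update on (z - s) with soft-thresholding of the remaining innovation. This illustrative alternation is an assumption, not the paper's exact augmented Lagrangian iteration.

```python
import numpy as np

def soft(v, lam):
    """Elementwise soft-thresholding (the l1 proximal operator)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def kalman_sparse_update(x_pred, P_pred, z, H, R, lam, n_iter=20):
    """Measurement update for a state-space model whose measurement
    equation includes a sparse residual s: z = H x + s + noise.
    Alternates a Kalman-gain update on (z - s) with soft-thresholding
    of the innovation, so large transient dynamics land in s instead
    of corrupting the state estimate x."""
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    s = np.zeros_like(z)
    x = x_pred
    for _ in range(n_iter):
        x = x_pred + K @ (z - s - H @ x_pred)   # update on de-sparsified data
        s = soft(z - H @ x, lam)                # residual above lam -> sparse term
    return x, s
```

In the scalar test below, an outlier-like measurement of 10 is mostly absorbed into the sparse residual (s → 8) rather than pulling the state estimate to 5 as a plain Kalman update would.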
DOI: http://dx.doi.org/10.1016/j.sigpro.2018.12.001
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6319923
April 2019

Time of flight PET reconstruction using nonuniform update for regional recovery uniformity.

Med Phys 2019 Feb 4;46(2):649-664. Epub 2019 Jan 4.

Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, 125 Nashua Street 6th floor, Suite 660, Boston, MA, 02114, USA.

Purpose: Time of flight (TOF) PET reconstruction is well known to statistically improve image quality compared to non-TOF PET. Although TOF PET can improve the overall signal-to-noise ratio (SNR) of the image compared to non-TOF PET, the SNR disparity between separate regions in the reconstructed image is higher with TOF data than with non-TOF data. Using the conventional ordered subset expectation maximization (OS-EM) method, the SNR in low-activity regions becomes significantly lower than in high-activity regions due to the different photon statistics of the TOF bins. A uniform recovery across different SNR regions is preferred if it can yield overall good image quality within a small number of iterations in practice. To allow more uniform recovery of regions, a spatially variant update is necessary for different SNR regions.

Methods: This paper focuses on designing a spatially variant step size and proposes a TOF-PET reconstruction method that uses a nonuniform separable quadratic surrogates (NUSQS) algorithm, providing a straightforward control of spatially variant step size. To control the noise, a spatially invariant quadratic regularization is incorporated, which by itself does not theoretically affect the recovery uniformity. The Nesterov's momentum method with ordered subsets (OS) is also used to accelerate the reconstruction speed. To evaluate the proposed method, an XCAT simulation phantom and clinical data from a pancreas cancer patient with full (ground truth) and 6× downsampled counts were used, where a Poisson thinning process was employed for downsampling. We selected tumor and cold regions of interest (ROIs) and compared the proposed method with the TOF-based conventional OS-EM and OS-SQS algorithms with an early stopping criterion.

Results: In computer simulation without regularization, the hot regions of OS-EM and OS-NUSQS converged similarly, but the cold region of OS-EM was noisier than that of OS-NUSQS after 24 iterations. With regularization, although the overall speeds of OS-EM and OS-NUSQS were similar, the recovery ratios of hot and cold regions reconstructed by OS-NUSQS were more uniform than those of the conventional OS-SQS and OS-EM. OS-NUSQS with Nesterov's momentum converged faster than the others while preserving uniform recovery. In the clinical example, we demonstrated that OS-NUSQS with Nesterov's momentum provides more uniform recovery ratios of hot and cold ROIs than OS-SQS and OS-EM. Although the cost function of all methods is equivalent, the proposed method has higher structural similarity (SSIM) values in hot and cold regions than the other methods after 24 iterations. Furthermore, our computing time on a graphics processing unit was 80× shorter than on quad-core CPUs.

Conclusion: This paper proposes a TOF PET reconstruction method using the OS-NUSQS with Nesterov's momentum for uniform recovery of different SNR regions. In particular, the spatially nonuniform step size in the proposed method provides uniform recovery ratios of different SNR regions, and the Nesterov's momentum further accelerates overall convergence while preserving uniform recovery. The computer simulation and clinical example demonstrate that the proposed method converges uniformly across ROIs. In addition, tumor contrast and SSIM of the proposed method were higher than those of the conventional OS-EM and OS-SQS in early iterations.
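The spatially variant step size at the core of the nonuniform SQS update can be sketched for a plain least-squares cost (a simplification of the paper's Poisson TOF likelihood): a diagonally scaled gradient step whose per-voxel denominators may be chosen nonuniformly across regions. The classic SQS denominator choice shown below guarantees monotonicity.

```python
import numpy as np

def sqs_denominator(A):
    """Classic separable-surrogate denominators d = |A|^T (|A| 1);
    any d >= this (elementwise) keeps the surrogate a majorizer."""
    absA = np.abs(A)
    return absA.T @ absA.sum(axis=1)

def sqs_step(A, x, y, d):
    """One SQS step for 0.5*||A x - y||^2: a diagonally scaled gradient
    step whose per-voxel denominators d act as (inverse) spatially
    variant step sizes -- the quantity OS-NUSQS makes nonuniform."""
    return x - A.T @ (A @ x - y) / d
```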
DOI: http://dx.doi.org/10.1002/mp.13321
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6501218
February 2019

Dictionary-Free MRI PERK: Parameter Estimation via Regression with Kernels.

IEEE Trans Med Imaging 2018 09 20;37(9):2103-2114. Epub 2018 Mar 20.

This paper introduces a fast, general method for dictionary-free parameter estimation in quantitative magnetic resonance imaging (QMRI): parameter estimation via regression with kernels (PERK). PERK first uses prior distributions and the nonlinear MR signal model to simulate many parameter-measurement pairs. Inspired by machine learning, PERK then takes these parameter-measurement pairs as labeled training points and learns from them a nonlinear regression function using kernel functions and convex optimization. PERK admits a simple implementation as per-voxel nonlinear lifting of MRI measurements followed by linear minimum mean-squared error regression. We demonstrate PERK for T1, T2 estimation, a well-studied application where it is simple to compare PERK estimates against dictionary-based grid search estimates and iterative optimization estimates. Numerical simulations as well as single-slice phantom and in vivo experiments demonstrate that PERK and the other tested methods produce comparable T1, T2 estimates in white and gray matter, but PERK is consistently at least 140× faster. This acceleration factor may increase by several orders of magnitude for full-volume QMRI estimation problems involving more latent parameters per voxel.
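PERK's per-voxel regression can be mimicked with plain kernel ridge regression on simulated parameter-measurement pairs. The Gaussian kernel, its width, the ridge parameter, and the toy signal model in the test are illustrative assumptions, not the paper's exact kernel and MMSE formulation.

```python
import numpy as np

def perk_train(Q, Y, sigma, rho):
    """Kernel ridge regression from simulated parameter-measurement
    pairs: rows of Q are measurement vectors, Y the corresponding
    latent-parameter values. Returns a predictor for new
    measurements."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel
    w = np.linalg.solve(gram(Q, Q) + rho * np.eye(len(Q)), Y)
    return lambda Qnew: gram(Qnew, Q) @ w
```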
DOI: http://dx.doi.org/10.1109/TMI.2018.2817547
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7017957
September 2018

PWLS-ULTRA: An Efficient Clustering and Learning-Based Approach for Low-Dose 3D CT Image Reconstruction.

IEEE Trans Med Imaging 2018 06;37(6):1498-1510

The development of computed tomography (CT) image reconstruction methods that significantly reduce patient radiation exposure while maintaining high image quality is an important area of research in low-dose CT imaging. We propose a new penalized weighted least squares (PWLS) reconstruction method that exploits regularization based on an efficient Union of Learned TRAnsforms (PWLS-ULTRA). The union of square transforms is pre-learned from numerous image patches extracted from a dataset of CT images or volumes. The proposed PWLS-based cost function is optimized by alternating between a CT image reconstruction step, and a sparse coding and clustering step. The CT image reconstruction step is accelerated by a relaxed linearized augmented Lagrangian method with ordered subsets that reduces the number of forward and back projections. Simulations with 2-D and 3-D axial CT scans of the extended cardiac-torso phantom and 3-D helical chest and abdomen scans show that for both normal-dose and low-dose levels, the proposed method significantly improves the quality of reconstructed images compared to PWLS reconstruction with a nonadaptive edge-preserving regularizer. PWLS with regularization based on a union of learned transforms leads to better image reconstructions than using a single learned square transform. We also incorporate patch-based weights in PWLS-ULTRA that enhance image quality and help improve image resolution uniformity. The proposed approach achieves comparable or better image quality than learned overcomplete synthesis dictionaries, but is much faster (computationally more efficient).
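The sparse coding and clustering step described above has a simple closed form: hard-threshold each patch under every transform in the union and keep the transform with the smallest sparsification error. The error measure in this sketch (transform-domain residual plus an ℓ0 sparsity penalty) is one common choice and an assumption here.

```python
import numpy as np

def ultra_cluster_step(patches, transforms, gamma):
    """Sparse coding and clustering step in the PWLS-ULTRA style:
    each patch is assigned to the square transform that gives the
    smallest sparsification error under hard thresholding at gamma."""
    assignments, codes = [], []
    for p in patches:
        best = None
        for k, W in enumerate(transforms):
            z = W @ p
            zt = np.where(np.abs(z) >= gamma, z, 0.0)   # hard threshold
            err = np.sum((z - zt) ** 2) + gamma ** 2 * np.count_nonzero(zt)
            if best is None or err < best[0]:
                best = (err, k, zt)
        assignments.append(best[1])
        codes.append(best[2])
    return assignments, codes
```

In the test, one patch is sparse under the identity transform and the other under a Haar-like transform, so each is clustered with the transform that sparsifies it best.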
DOI: http://dx.doi.org/10.1109/TMI.2018.2832007
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6034686
June 2018

Image Reconstruction is a New Frontier of Machine Learning.

IEEE Trans Med Imaging 2018 06;37(6):1289-1296

Over the past several years, machine learning, or more generally artificial intelligence, has generated overwhelming research interest and attracted unprecedented public attention. As tomographic imaging researchers, we share the excitement from our imaging perspective [item 1) in the Appendix], and organized this special issue dedicated to the theme of "Machine learning for image reconstruction." This special issue is a sister issue of the special issue published in May 2016 of this journal with the theme "Deep learning in medical imaging" [item 2) in the Appendix]. While the previous special issue targeted medical image processing/analysis, this special issue focuses on data-driven tomographic reconstruction. These two special issues are highly complementary, since image reconstruction and image analysis are two of the main pillars of medical imaging. Together, they cover the whole workflow of medical imaging: from tomographic raw data/features to reconstructed images and then to extracted diagnostic features/readings.
DOI: http://dx.doi.org/10.1109/TMI.2018.2833635
June 2018

ANOTHER LOOK AT THE FAST ITERATIVE SHRINKAGE/THRESHOLDING ALGORITHM (FISTA).

SIAM J Optim 2018 30;28(1):223-250. Epub 2018 Jan 30.

Dept. of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109 USA.

This paper provides a new way of developing the "Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)" [3] that is widely used for minimizing composite convex functions with a nonsmooth term such as the ℓ1 regularizer. In particular, this paper shows that FISTA corresponds to an optimized approach to accelerating the proximal gradient method with respect to a worst-case bound on the cost function. This paper then proposes a new algorithm that is derived by instead optimizing the step coefficients of the proximal gradient method with respect to a worst-case bound on the composite gradient mapping. The proof is based on the worst-case analysis framework called the Performance Estimation Problem [11].
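For reference, FISTA itself is short. The sketch below applies it to the lasso, 0.5‖Ax − b‖² + λ‖x‖₁, with the classic momentum sequence t_k; the step size comes from the spectral norm of A.

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding: the proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for the lasso: minimize 0.5*||A x - b||^2 + lam*||x||_1.
    Takes proximal gradient steps at an extrapolated point y, with the
    momentum sequence t_k of the original algorithm."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x
```

For orthogonal A the lasso solution is elementwise soft-thresholding of A^T b, which makes a convenient sanity check.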
DOI: http://dx.doi.org/10.1137/16M108940X
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5966151
January 2018

Y-90 SPECT ML image reconstruction with a new model for tissue-dependent bremsstrahlung production using CT information: a proof-of-concept study.

Phys Med Biol 2018 05 22;63(11):115001. Epub 2018 May 22.

Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, United States of America.

While the yield of positrons used in Y-90 PET is independent of tissue media, Y-90 SPECT imaging is complicated by the tissue dependence of bremsstrahlung photon generation. The probability of bremsstrahlung production is proportional to the square of the atomic number of the medium. Hence, the same amount of activity in different tissue regions of the body will produce different numbers of bremsstrahlung photons. Existing reconstruction methods disregard this tissue-dependency, potentially impacting both qualitative and quantitative imaging of heterogeneous regions of the body such as bone with marrow cavities. In this proof-of-concept study, we propose a new maximum-likelihood method that incorporates bremsstrahlung generation probabilities into the system matrix, enabling images of the desired Y-90 distribution to be reconstructed instead of the 'bremsstrahlung distribution' that is obtained with existing methods. The tissue-dependent probabilities are generated by Monte Carlo simulation while bone volume fractions for each SPECT voxel are obtained from co-registered CT. First, we demonstrate the tissue dependency in a SPECT/CT imaging experiment with Y-90 in bone equivalent solution and water. Visually, the proposed reconstruction approach better matched the true image and the Y-90 PET image than the standard bremsstrahlung reconstruction approach. An XCAT phantom simulation including bone and marrow regions also demonstrated better agreement with the true image using the proposed reconstruction method. Quantitatively, compared with the standard reconstruction, the new method improved estimation of the liquid bone:water activity concentration ratio by 40% in the SPECT measurement and the cortical bone:marrow activity concentration ratio by 58% in the XCAT simulation.
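The proposed system-model change can be sketched as a column scaling of the SPECT system matrix by per-voxel bremsstrahlung production probabilities, after which standard ML-EM recovers the activity itself rather than the "bremsstrahlung distribution." The probability values and the toy 2-voxel ML-EM below are illustrative assumptions (the paper derives the probabilities by Monte Carlo simulation).

```python
import numpy as np

def tissue_weighted_system(A, bone_frac, p_bone, p_soft):
    """Fold tissue-dependent bremsstrahlung production into the system
    matrix: voxel j with bone volume fraction f_j emits photons with
    relative probability p_j = f_j*p_bone + (1 - f_j)*p_soft, so the
    expected data are (A diag(p)) x and ML reconstruction returns the
    activity x directly."""
    p = bone_frac * p_bone + (1.0 - bone_frac) * p_soft
    return A * p[None, :]

def mlem(A, y, n_iter=2000):
    """Textbook ML-EM iterations for y ~ Poisson(A x)."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)
    for _ in range(n_iter):
        x = x * (A.T @ (y / (A @ x))) / sens
    return x
```

With noiseless data, ML-EM on the tissue-weighted matrix recovers equal activity in a "bone" voxel and a "water" voxel even though the bone voxel produced twice the photons.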
DOI: http://dx.doi.org/10.1088/1361-6560/aac1ad
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6112241
May 2018

Fast Spatial Resolution Analysis of Quadratic Penalized Least-Squares Image Reconstruction With Separate Real and Imaginary Roughness Penalty: Application to fMRI.

IEEE Trans Med Imaging 2018 02;37(2):604-614

Penalized least-squares iterative image reconstruction algorithms used for spatial resolution-limited imaging, such as functional magnetic resonance imaging (fMRI), commonly use a quadratic roughness penalty to regularize the reconstructed images. When used for complex-valued images, the conventional roughness penalty regularizes the real and imaginary parts equally. However, these imaging methods sometimes benefit from separate penalties for each part. The spatial smoothness from the roughness penalty on the reconstructed image is dictated by the regularization parameter(s). One method to set the parameter to a desired smoothness level is to evaluate the full width at half maximum of the reconstruction method's local impulse response. Previous work has shown that when using the conventional quadratic roughness penalty, one can approximate the local impulse response using an FFT-based calculation. However, that acceleration method cannot be applied directly for separate real and imaginary regularization. This paper proposes a fast and stable calculation for this case that also uses FFT-based calculations to approximate the local impulse responses of the real and imaginary parts. This approach is demonstrated with a quadratic image reconstruction of fMRI data that uses separate roughness penalties for the real and imaginary parts.
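The FFT-based local-impulse-response idea can be sketched with a circulant approximation, shown here for the conventional single quadratic penalty rather than the separate real/imaginary case: with an identity data term, the reconstruction's frequency response is H(ω) = 1/(1 + λR(ω)), and the inverse FFT gives the PSF whose FWHM quantifies smoothness. The identity data term and first-order penalty are simplifying assumptions.

```python
import numpy as np

def qpwls_local_impulse_response(n, lam):
    """Circulant (FFT-based) approximation of the local impulse
    response of quadratic penalized least-squares with an identity
    data term and a first-order roughness penalty: in frequency,
    H(w) = 1 / (1 + lam * R(w)) with R(w) = 2 - 2*cos(w), the
    penalty's spectrum. Larger lam -> wider PSF (more smoothing)."""
    w = 2.0 * np.pi * np.fft.fftfreq(n)
    H = 1.0 / (1.0 + lam * (2.0 - 2.0 * np.cos(w)))
    return np.fft.fftshift(np.real(np.fft.ifft(H)))

def fwhm_samples(h):
    """Coarse FWHM: number of samples at or above half the peak."""
    return int(np.sum(h >= h.max() / 2.0))
```

This is the kind of fast calculation one sweeps over λ to hit a target FWHM, instead of reconstructing repeatedly.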
DOI: http://dx.doi.org/10.1109/TMI.2017.2768825
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5804832
February 2018