Publications by authors named "Abderrahim Halimi"

13 Publications


High-speed object detection with a single-photon time-of-flight image sensor.

Opt Express 2021 Oct;29(21):33184-33196

3D time-of-flight (ToF) imaging is used in a variety of applications such as augmented reality (AR), computer interfaces, robotics and autonomous systems. Single-photon avalanche diodes (SPADs) are one of the enabling technologies providing accurate depth data even over long ranges. By developing SPADs in array format with integrated processing combined with pulsed, flood-type illumination, high-speed 3D capture is possible. However, array sizes tend to be relatively small, limiting the lateral resolution of the resulting depth maps and, consequently, the information that can be extracted from the image for applications such as object detection. In this paper, we demonstrate that these limitations can be overcome through the use of convolutional neural networks (CNNs) for high-performance object detection. We present outdoor results from a portable SPAD camera system that outputs 16-bin photon timing histograms with 64×32 spatial resolution, with each histogram containing thousands of photons. The results, obtained with exposure times down to 2 ms (equivalent to 500 FPS) and at signal-to-background ratios (SBR) as low as 0.05, point to the advantages of providing the CNN with full histogram data rather than point clouds alone. Alternatively, a combination of point cloud and active intensity data may be used as input for a similar level of performance. In either case, the GPU-accelerated processing time is less than 1 ms per frame, leading to an overall latency (image acquisition plus processing) in the millisecond range, making the results relevant for safety-critical computer vision applications that would benefit from faster-than-human reaction times.
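The histogram-versus-point-cloud distinction above can be illustrated with a toy baseline that collapses each per-pixel timing histogram to its peak bin, which is roughly the information a point cloud retains. A minimal sketch, assuming a hypothetical 2 ns bin width and a synthetic scene (not the paper's sensor parameters):

```python
import numpy as np

C = 3e8            # speed of light (m/s)
BIN_WIDTH = 2e-9   # hypothetical timing-bin width (s), not the paper's value

def histogram_to_depth(hist):
    """Collapse a (H, W, bins) photon timing histogram to a depth map by
    keeping only the most-populated bin per pixel -- roughly the
    information a point cloud retains, versus the full histogram a CNN sees."""
    peak_bin = np.argmax(hist, axis=-1)   # index of the photon peak
    tof = peak_bin * BIN_WIDTH            # round-trip time of flight
    return tof * C / 2                    # halve: light travels out and back

# toy 2x2 frame with a clean photon peak in bin 4 at every pixel
hist = np.zeros((2, 2, 16))
hist[..., 4] = 100
depth = histogram_to_depth(hist)          # 4 * 2 ns * c / 2 = 1.2 m everywhere
```

A full-histogram CNN additionally sees the background level and pulse shape around the peak, which this reduction discards.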
http://dx.doi.org/10.1364/OE.435619
October 2021

Multivariate semi-blind deconvolution of fMRI time series.

Neuroimage 2021 11 22;241:118418. Epub 2021 Jul 22.

CEA, DRF/Joliot, NeuroSpin, Université Paris-Saclay, Gif-sur-Yvette F-91191, France; Parietal Team, Université Paris-Saclay, CEA, Inria, Gif-sur-Yvette 91190, France. Electronic address:

Whole-brain estimation of the haemodynamic response function (HRF) in functional magnetic resonance imaging (fMRI) is critical to gain insight into the global status of the neurovascular coupling of an individual, in healthy or pathological conditions. Most existing approaches in the literature work on task-fMRI data and rely on the experimental paradigm as a surrogate of neural activity, hence remaining inoperative on resting-state fMRI (rs-fMRI) data. To cope with this issue, recent works have performed either a two-step analysis to detect large neural events and then characterize the HRF shape, or a joint estimation of both the neural and haemodynamic components in a univariate fashion. In this work, we express the neural activity signals as a combination of piece-wise constant temporal atoms associated with sparse spatial maps and introduce a haemodynamic parcellation of the brain featuring a temporally dilated version of a given HRF model in each parcel, with unknown dilation parameters. We formulate the joint estimation of the HRF shapes and spatio-temporal neural representations as a multivariate semi-blind deconvolution problem in a paradigm-free setting and introduce constraints inspired by the dictionary learning literature to ease its identifiability. A fast alternating minimization algorithm, along with its efficient implementation, is proposed and validated on both synthetic and real rs-fMRI data at the subject level. To demonstrate its significance at the population level, we apply this new framework to the UK Biobank data set, first for the discrimination of haemodynamic territories between balanced groups (n=24 individuals each) of patients with a history of stroke and healthy controls, and second for the analysis of the effect of normal aging on the neurovascular coupling. Overall, we statistically demonstrate that a pathology like stroke or a condition like normal brain aging induces longer haemodynamic delays in certain brain areas (e.g. the circle of Willis, and the occipital, temporal and frontal cortices) and that this haemodynamic feature may predict the individual's age with an accuracy of 74% in a supervised classification task performed on n=459 subjects.
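The dilated-HRF idea can be sketched numerically: convolving the same piece-wise constant neural atom with a time-stretched response delays the BOLD-like signal. A toy illustration, assuming a single-gamma HRF shape and a 1 s repetition time (both illustrative, not the paper's model):

```python
import numpy as np

def gamma_hrf(t, dilation=1.0):
    # toy gamma-shaped haemodynamic response; the dilation parameter
    # stretches the time axis, standing in for the per-parcel dilated HRF
    ts = t / dilation
    h = ts**5 * np.exp(-ts)
    return h / h.sum()

TR = 1.0                                 # repetition time (s), assumed
t = np.arange(0.0, 30.0, TR)
z = np.zeros(200)
z[50:80] = 1.0                           # one piece-wise constant neural atom
# BOLD-like signals for a normal and a "slower" (dilated) parcel
ys = {d: np.convolve(z, gamma_hrf(t, d))[:200] for d in (1.0, 1.5)}
```

The dilated signal reaches half of its peak amplitude later, which is the kind of haemodynamic delay the study associates with stroke and aging.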
http://dx.doi.org/10.1016/j.neuroimage.2021.118418
November 2021

Robust real-time 3D imaging of moving scenes through atmospheric obscurant using single-photon LiDAR.

Sci Rep 2021 May 27;11(1):11236. Epub 2021 May 27.

School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK.

Recently, time-of-flight LiDAR using the single-photon detection approach has emerged as a potential solution for three-dimensional imaging in challenging measurement scenarios, such as over distances of many kilometres. The high sensitivity and picosecond timing resolution afforded by single-photon detection offers high-resolution depth profiling of remote, complex scenes while maintaining low power optical illumination. These properties are ideal for imaging in highly scattering environments such as through atmospheric obscurants, for example fog and smoke. In this paper, we present the reconstruction of depth profiles of moving objects through high levels of obscurant equivalent to five attenuation lengths between transceiver and target at stand-off distances up to 150 m. We used a robust statistically based processing algorithm designed for the real-time reconstruction of single-photon data obtained in the presence of atmospheric obscurant, including uncertainty estimates in the depth reconstruction. This demonstration of real-time 3D reconstruction of moving scenes points a way forward for high-resolution imaging from mobile platforms in degraded visual environments.
http://dx.doi.org/10.1038/s41598-021-90587-8
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8159934
May 2021

Robust super-resolution depth imaging via a multi-feature fusion deep network.

Opt Express 2021 Apr;29(8):11917-11937

The number of applications that use depth imaging is increasing rapidly, e.g. self-driving autonomous vehicles and auto-focus assist on smartphone cameras. Light detection and ranging (LIDAR) via single-photon avalanche diode (SPAD) arrays is an emerging technology that enables the acquisition of depth images at high frame rates. However, the spatial resolution of this technology is typically low in comparison to the intensity images recorded by conventional cameras. To increase the native resolution of depth images from a SPAD camera, we develop a deep network built to take advantage of the multiple features that can be extracted from a camera's histogram data. The network is designed for a SPAD camera operating in a dual mode, capturing alternating low-resolution depth and high-resolution intensity images at high frame rates, so the system does not require any additional sensor to provide intensity images. The network then uses the intensity images and multiple features extracted from down-sampled histograms to guide the up-sampling of the depth. Our network provides significant image resolution enhancement and image denoising across a wide range of signal-to-noise ratios and photon levels. Additionally, we show that the network can be applied to other types of SPAD data, demonstrating the generality of the algorithm.
http://dx.doi.org/10.1364/OE.415563
April 2021

3D LIDAR imaging using Ge-on-Si single-photon avalanche diode detectors.

Opt Express 2020 Jan;28(2):1330-1344

We present a scanning light detection and ranging (LIDAR) system incorporating an individual Ge-on-Si single-photon avalanche diode (SPAD) detector for depth and intensity imaging in the short-wavelength infrared region. The time-correlated single-photon counting technique was used to determine the return photon time-of-flight for target depth information. In laboratory demonstrations, depth and intensity reconstructions were made of targets at short range, using advanced image processing algorithms tailored for the analysis of single-photon time-of-flight data. These laboratory measurements were used to predict the performance of the single-photon LIDAR system at longer ranges, providing estimations that sub-milliwatt average power levels would be required for kilometer range depth measurements.
http://dx.doi.org/10.1364/OE.383243
January 2020

Learning Non-Local Spatial Correlations To Restore Sparse 3D Single-Photon Data.

IEEE Trans Image Process 2019 Dec 11. Epub 2019 Dec 11.

This paper presents a new algorithm for the learning of spatial correlation and non-local restoration of single-photon 3D Lidar images acquired in the photon-starved regime (less than one photon per pixel) or with a reduced number of scanned spatial points (pixels). The algorithm alternates between three steps: (i) extract multi-scale information, (ii) build a robust graph of non-local spatial correlations between pixels, and (iii) restore the depth and reflectivity images. A non-uniform sampling approach, which assigns larger patches to homogeneous regions and smaller ones to heterogeneous regions, is adopted to reduce the computational cost associated with the graph. The restoration of the 3D images is achieved by minimizing a cost function accounting for the multi-scale information and the non-local spatial correlation between patches. This minimization problem is efficiently solved using the alternating direction method of multipliers (ADMM), which has fast convergence properties. Various results based on simulated and real Lidar data show the benefits of the proposed algorithm, which improves the quality of the estimated depth and reflectivity images, especially in the photon-starved regime or when the data contain a reduced number of spatial points.
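Step (ii) can be sketched in miniature: for a reference pixel, weight every pixel in a search window by the similarity of its surrounding patch. The fixed patch and search-window sizes and the bandwidth h below are illustrative assumptions; the paper adapts patch size to region homogeneity:

```python
import numpy as np

def nonlocal_weights(img, i, j, patch=3, search=7, h=0.1):
    """Weight each pixel in a search window by how similar its surrounding
    patch is to the patch around (i, j) -- a toy version of building
    non-local spatial correlations between pixels."""
    half, s = patch // 2, search // 2
    ref = img[i - half:i + half + 1, j - half:j + half + 1]
    W = np.zeros((search, search))
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            cand = img[i + di - half:i + di + half + 1,
                       j + dj - half:j + dj + half + 1]
            W[di + s, dj + s] = np.exp(-np.sum((ref - cand) ** 2) / h**2)
    return W / W.sum()                   # normalise to a weight distribution

img = np.outer(np.linspace(0.0, 1.0, 20), np.ones(20))  # toy depth ramp
W = nonlocal_weights(img, 10, 10)
```

Pixels with similar patches (here, those on the same depth contour) receive the largest weights, so averaging under W denoises along structures rather than across them.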
http://dx.doi.org/10.1109/TIP.2019.2957918
December 2019

Long-range depth imaging using a single-photon detector array and non-local data fusion.

Sci Rep 2019 May 30;9(1):8075. Epub 2019 May 30.

School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK.

The ability to measure and record high-resolution depth images at long stand-off distances is important for a wide range of applications, including connected and autonomous vehicles, defense and security, and agriculture and mining. In LIDAR (light detection and ranging) applications, single-photon sensitive detection is an emerging approach, offering high sensitivity to light and picosecond temporal resolution, and consequently excellent surface-to-surface resolution. The use of large format CMOS (complementary metal-oxide semiconductor) single-photon detector arrays provides high spatial resolution and allows the timing information to be acquired simultaneously across many pixels. In this work, we combine state-of-the-art single-photon detector array technology with non-local data fusion to generate high resolution three-dimensional depth information of long-range targets. The system is based on a visible pulsed illumination system at a wavelength of 670 nm and a 240 × 320 array sensor, achieving sub-centimeter precision in all three spatial dimensions at a distance of 150 meters. The non-local data fusion combines information from an optical image with sparse sampling of the single-photon array data, providing accurate depth information at low signature regions of the target.
http://dx.doi.org/10.1038/s41598-019-44316-x
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6542841
May 2019

Three-dimensional single-photon imaging through obscurants.

Opt Express 2019 Feb;27(4):4590-4611

We investigate the depth imaging of objects through various densities of different obscurants (water fog, glycol-based vapor, and incendiary smoke) using a time-correlated single-photon detection system which had an operating wavelength of 1550 nm and an average optical output power of approximately 1.5 mW. It consisted of a monostatic scanning transceiver unit used in conjunction with a picosecond laser source and an individual Peltier-cooled InGaAs/InP single-photon avalanche diode (SPAD) detector. We acquired depth and intensity data of targets imaged through distances of up to 24 meters for the different obscurants. We compare several statistical algorithms which reconstruct both the depth and intensity images for short data acquisition times, including very low signal returns in the photon-starved regime.
http://dx.doi.org/10.1364/OE.27.004590
February 2019

High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor.

Opt Express 2018 Mar;26(5):5541-5557

A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode used in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided and it was therefore possible to have an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that is more difficult to achieve in a SPAD array which uses time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill-factor, high frame rate and large array format of this range-gated CMOS SPAD array.
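The "standard cross-correlation approach" mentioned above amounts to sliding the instrument response function (IRF) along the timing histogram and converting the best-matching lag to a depth. A minimal sketch with a toy 5-bin pulse and a hypothetical 50 ps bin width (not the sensor's actual parameters):

```python
import numpy as np

BIN_W = 50e-12     # hypothetical timing-bin width (s)
C = 3e8            # speed of light (m/s)

def xcorr_depth(hist, irf):
    """Cross-correlation depth estimate: correlate the histogram with the
    IRF and take the lag with the best match as the return time."""
    scores = np.correlate(hist, irf, mode="valid")
    lag = int(np.argmax(scores)) + len(irf) // 2   # centre of matched pulse
    return lag * BIN_W * C / 2                     # round trip -> one way

irf = np.array([1.0, 4.0, 8.0, 4.0, 1.0])  # toy symmetric pulse shape
hist = np.zeros(200)
hist[100:105] = irf                        # clean return centred on bin 102
depth = xcorr_depth(hist, irf)
```

This works well at high signal-to-noise ratios; the clustering-based strategy in the paper is what takes over in the sparse photon regime.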
http://dx.doi.org/10.1364/OE.26.005541
March 2018

Single-photon three-dimensional imaging at up to 10 kilometers range.

Opt Express 2017 May;25(10):11919-11931

Depth and intensity profiling of targets at a range of up to 10 km is demonstrated using the time-of-flight, time-correlated single-photon counting (TCSPC) technique. The system comprised a pulsed laser source at 1550 nm wavelength, a monostatic scanning transceiver and a single-element InGaAs/InP single-photon avalanche diode (SPAD) detector. High-resolution three-dimensional images of various targets acquired over ranges between 800 metres and 10.5 km demonstrate long-range depth and intensity profiling, feature extraction and the potential for target recognition. Using a total variation restoration optimization algorithm, the acquisition time necessary for each pixel could be reduced by at least a factor of ten compared to a pixel-wise image processing approach. Kilometer-range depth profiles are reconstructed with average signal returns of less than one photon per pixel.
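Total variation (TV) restoration penalises differences between neighbouring depth estimates, which is why it recovers piecewise-flat surfaces from very few photons. A gradient-descent sketch on a 1D depth profile, using a smoothed TV term with illustrative regularisation and step-size values (the paper's optimisation operates on full images and is not necessarily this scheme):

```python
import numpy as np

def tv_restore(y, lam=0.3, step=0.05, iters=300, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i sqrt((x_{i+1}-x_i)^2 + eps).
    The smoothing eps and step size are chosen for stability of this toy."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d**2 + eps)      # gradient of the smoothed |diff|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= g
        tv_grad[1:] += g
        x -= step * ((x - y) + lam * tv_grad)
    return x

rng = np.random.default_rng(1)
clean = np.repeat([1.0, 3.0], 50)        # two flat surfaces at different depths
noisy = clean + 0.3 * rng.standard_normal(100)
restored = tv_restore(noisy)
```

The restored profile is flat within each surface while the step between surfaces survives, which is the behaviour that lets per-pixel acquisition time drop without losing depth edges.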
http://dx.doi.org/10.1364/OE.25.011919
May 2017

Hyperspectral Unmixing in Presence of Endmember Variability, Nonlinearity, or Mismodeling Effects.

IEEE Trans Image Process 2016 10 11;25(10):4565-79. Epub 2016 Jul 11.

This paper presents three hyperspectral mixture models jointly with Bayesian algorithms for supervised hyperspectral unmixing. Based on the residual component analysis model, the proposed general formulation assumes the linear model to be corrupted by an additive term whose expression can be adapted to account for nonlinearities (NLs), endmember variability (EV), or mismodeling effects (MEs). The NL effect is introduced by considering a polynomial expression that is related to bilinear models. The proposed new formulation of EV accounts for shape and scale endmember changes while enforcing a smooth spectral/spatial variation. The ME formulation considers the effect of outliers and copes with some types of EV and NL. The known constraints on the parameters of each observation model are modeled via suitable priors. The posterior distribution associated with each Bayesian model is optimized using a coordinate descent algorithm, which allows the computation of the maximum a posteriori estimator of the unknown model parameters. The proposed mixture and Bayesian models and their estimation algorithms are validated on both synthetic and real images, showing competitive results regarding the quality of the inferences and the computational complexity when compared with state-of-the-art algorithms.
http://dx.doi.org/10.1109/TIP.2016.2590324
October 2016

Unsupervised Unmixing of Hyperspectral Images Accounting for Endmember Variability.

IEEE Trans Image Process 2015 Dec 21;24(12):4904-17. Epub 2015 Aug 21.

This paper presents an unsupervised Bayesian algorithm for hyperspectral image unmixing, accounting for endmember variability. The pixels are modeled by a linear combination of endmembers weighted by their corresponding abundances. However, the endmembers are assumed random to account for their variability in the image. Additive noise is also considered in the proposed model, generalizing the normal compositional model. The proposed algorithm exploits the whole image to benefit from both spectral and spatial information. It estimates both the mean and the covariance matrix of each endmember in the image. This allows the behavior of each material to be analyzed and its variability to be quantified in the scene. A spatial segmentation is also obtained based on the estimated abundances. In order to estimate the parameters associated with the proposed Bayesian model, we use a Hamiltonian Monte Carlo algorithm. The performance of the resulting unmixing strategy is evaluated through simulations conducted on both synthetic and real data.
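The generative side of this model is easy to sketch: each pixel is a simplex-constrained mixture of endmembers that are themselves drawn at random around per-material means. The toy sizes, diagonal covariance, and noise level below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
L, R, N = 50, 3, 100          # bands, endmembers, pixels (toy sizes)

# per-endmember mean spectrum and per-band standard deviation: the
# quantities the unmixing algorithm estimates from the image
means = rng.uniform(0.1, 0.9, (R, L))
sigmas = np.full((R, L), 0.02)

# abundances drawn uniformly on the simplex (non-negative, sum to one)
A = rng.dirichlet(np.ones(R), size=N)    # shape (N, R)

pixels = np.empty((N, L))
for n in range(N):
    E = means + sigmas * rng.standard_normal((R, L))   # random endmembers
    pixels[n] = A[n] @ E + 0.01 * rng.standard_normal(L)  # additive noise
```

Because a fresh endmember realisation E is drawn per pixel, the same material varies across the scene; inverting this model is what the Hamiltonian Monte Carlo sampler does.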
http://dx.doi.org/10.1109/TIP.2015.2471182
December 2015

Supervised nonlinear spectral unmixing using a postnonlinear mixing model for hyperspectral imagery.

IEEE Trans Image Process 2012 Jun 13;21(6):3017-25. Epub 2012 Feb 13.

University of Toulouse, IRIT/INP-ENSEEIHT/TeSA, Toulouse, France.

This paper presents a nonlinear mixing model for hyperspectral image unmixing. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by an additive white Gaussian noise. These nonlinear functions are approximated using polynomial functions leading to a polynomial postnonlinear mixing model. A Bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model. The performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data.
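The polynomial postnonlinear model applies a nonlinear function after the linear mixture, rather than to the endmembers themselves. A minimal sketch with a second-order polynomial and an illustrative nonlinearity coefficient (the toy sizes and values are assumptions, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(2)
L, R = 30, 3                          # spectral bands, endmembers (toy sizes)
M = rng.uniform(0.0, 1.0, (L, R))     # pure endmember spectra as columns
a = np.array([0.5, 0.3, 0.2])         # abundances: non-negative, sum to one

lin = M @ a                           # ordinary linear mixture
b = 0.4                               # hypothetical nonlinearity coefficient
y = lin + b * lin**2                  # second-order postnonlinearity g(u) = u + b*u^2
y_obs = y + 0.001 * rng.standard_normal(L)   # additive white Gaussian noise
```

Estimating a and b jointly from y_obs, given M, is the supervised unmixing problem the paper's Bayesian and optimization methods address; setting b = 0 recovers the linear model.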
http://dx.doi.org/10.1109/TIP.2012.2187668
June 2012