Publications by authors named "Guangqi Hou"

3 Publications


Binocular Light-Field: Imaging Theory and Occlusion-Robust Depth Perception Application.

IEEE Trans Image Process 2019 Sep 27. Epub 2019 Sep 27.

Binocular stereo vision (SV) has been widely used to reconstruct depth information, but it is vulnerable to scenes with strong occlusions. As an emerging computational photography technology, light-field (LF) imaging offers a novel solution to passive depth perception by recording multiple angular views in a single exposure. In this paper, we combine binocular SV and LF imaging to form a binocular-LF imaging system. An imaging theory is derived by modeling the imaging process and analyzing disparity properties based on geometrical optics. An accurate, occlusion-robust depth estimation algorithm is then proposed that exploits multibaseline stereo matching cues and defocus cues. Occlusions caused by binocular SV and LF imaging are detected and handled to eliminate matching ambiguities and outliers. Finally, we develop a binocular-LF database and capture real-world scenes with our binocular-LF system to test accuracy and robustness. The experimental results demonstrate that the proposed algorithm recovers high-quality depth maps with smooth surfaces and precise geometric shapes, tackling the drawbacks of binocular SV and LF imaging simultaneously.
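The multibaseline matching cue described in the abstract can be illustrated with a toy NumPy sketch: for each candidate disparity, every view is warped back in proportion to its baseline, and the disparity with the lowest accumulated matching cost wins. This is only a minimal stand-in, not the authors' algorithm; the function name, the winner-take-all cost, and the use of a circular shift in place of proper sub-pixel warping are all assumptions, and the paper's defocus cues and occlusion handling are omitted.

```python
import numpy as np

def multibaseline_disparity(ref, views, baselines, max_disp):
    """Winner-take-all disparity from several horizontally displaced views.

    For each candidate disparity d, the view at baseline b is warped back
    by d * b pixels; where the candidate is correct, the warped views align
    with the reference and the accumulated absolute difference is small.
    """
    cost = np.zeros((max_disp + 1,) + ref.shape)
    for d in range(max_disp + 1):
        for view, b in zip(views, baselines):
            # circular shift stands in for proper sub-pixel warping
            warped = np.roll(view, d * b, axis=1)
            cost[d] += np.abs(warped - ref)
    # pick the disparity with the lowest matching cost per pixel
    return cost.argmin(axis=0)
```

With several baselines, a disparity that is ambiguous in one view pair is usually disambiguated by another, which is the intuition the paper builds on before adding defocus cues.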
DOI: http://dx.doi.org/10.1109/TIP.2019.2943019
September 2019

LFNet: A Novel Bidirectional Recurrent Convolutional Neural Network for Light-Field Image Super-Resolution.

IEEE Trans Image Process 2018 Sep;27(9):4274-4286

The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To mitigate the dependence on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicit multi-scale fusion scheme that accumulates contextual information from multiple scales for super-resolution reconstruction. The scheme is incorporated into a bidirectional recurrent convolutional neural network, which iteratively models spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network with the same structure are ensembled for the final output via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and achieves superior quality for the human visual system. Furthermore, the proposed method enhances the performance of light-field applications such as depth estimation.
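The bidirectional recurrence over a row (or column) of sub-aperture views can be sketched with a toy exponential-blend recurrence: a forward pass and a backward pass each propagate information along the view sequence, and their states are averaged. This is only an illustrative stand-in for LFNet's learned recurrent convolutions; the function name and the fixed blending weight `alpha` are assumptions.

```python
import numpy as np

def bidirectional_fuse(views, alpha=0.5):
    """Toy bidirectional recurrence over a row of sub-aperture images.

    Each hidden state blends the current view with the previous state,
    so every output pixel aggregates context from neighboring views in
    both directions (a stand-in for learned recurrent convolutions).
    """
    fwd, h = [], np.zeros_like(views[0])
    for v in views:                      # left-to-right pass
        h = alpha * v + (1 - alpha) * h
        fwd.append(h)
    bwd, h = [], np.zeros_like(views[0])
    for v in reversed(views):            # right-to-left pass
        h = alpha * v + (1 - alpha) * h
        bwd.append(h)
    bwd.reverse()
    # average the two directional states per view
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]
```

In the paper, a horizontal and a vertical sub-network of this bidirectional form are combined via stacked generalization; the sketch shows only the directional aggregation idea for a single row.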
DOI: http://dx.doi.org/10.1109/TIP.2018.2834819
September 2018

High-resolution light field reconstruction using a hybrid imaging system.

Appl Opt 2016 Apr;55(10):2580-2593

Recently, light-field cameras have drawn much attention for their innovative performance in photographic and scientific applications. However, the narrow baselines and constrained spatial resolution of current light-field cameras restrict their usability. We therefore design a hybrid imaging system containing a light-field camera and a high-resolution digital single-lens reflex (DSLR) camera; the two cameras share the same optical path via a beam splitter so as to reconstruct high-resolution light fields. The high-resolution 4D light fields are reconstructed with a phase-based perspective-variation strategy. First, we apply complex steerable pyramid decomposition to the high-resolution DSLR image. Then, we perform phase-based perspective-shift processing with the disparity value, extracted from the upsampled light-field depth map, to create high-resolution synthetic light-field images. High-resolution digitally refocused images and high-resolution depth maps can be generated in this way. Furthermore, controlling the magnitude of the perspective shift lets us change the depth-of-field rendering in the digitally refocused images. We show several experimental results to demonstrate the effectiveness of our approach.
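The phase-based perspective shift rests on the fact that translating a signal is equivalent to multiplying its frequency components by a linear phase ramp. A minimal 1-D NumPy sketch of that principle follows, assuming a global, integer-valued shift on a periodic signal; the paper instead applies per-band phase manipulation in a complex steerable pyramid with per-pixel disparities, so this is an illustration of the underlying idea, not the authors' pipeline.

```python
import numpy as np

def phase_shift(signal, shift):
    """Translate a 1-D periodic signal by `shift` samples via the Fourier
    shift theorem: multiplying bin k by exp(-2j*pi*f_k*shift) moves the
    signal in space without resampling its amplitudes."""
    n = len(signal)
    freqs = np.fft.fftfreq(n)                     # cycles per sample
    spectrum = np.fft.fft(signal)
    ramp = np.exp(-2j * np.pi * freqs * shift)    # linear phase ramp
    return np.real(np.fft.ifft(spectrum * ramp))
```

Because only phases change, the shift does not blur amplitude content; the steerable pyramid extends this to spatially varying shifts by localizing the phase manipulation per scale and orientation.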
DOI: http://dx.doi.org/10.1364/AO.55.002580
April 2016