
Point Spread Function Engineering for Scene Recovery

A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. Over the last decade, a range of computational cameras have been proposed that use various optical designs to encode, and computation to decode, useful visual information. What is often missing, however, is a quantitative analysis of the relation between camera design and the captured visual information, and little systematic work has been done to evaluate and optimize these computational camera designs. While the optics of computational cameras may be quite complicated, many of them can be effectively characterized by their point spread functions (PSFs): the intensity distribution on the image sensor in response to a point light source in the scene. This thesis explores techniques to characterize, evaluate, and optimize computational cameras via PSF engineering for computer vision tasks, including image recovery, 3D reconstruction, and image refocusing.

I first explore PSF engineering techniques to recover image details from blurry images. Image blur is a common problem in photography, arising from defocus, lens aberration, diffraction, atmospheric turbulence, object motion, and other causes. Image blur is often formulated as a convolution of the latent blur-free image with a PSF, and deconvolution (or deblurring) techniques must be used to recover image details from a blurry region. Here, I propose a comprehensive framework of PSF evaluation for image deblurring that takes into account the effects of image noise, the deblurring algorithm, and the structure of natural images. In defocus blur, the shape of the defocus PSF is largely determined by the aperture pattern of the camera lens. Using an evaluation criterion derived from this framework, I optimize the aperture pattern to preserve many more image details when defocus occurs. Through both simulations and experiments, I demonstrate the significant improvement gained by using optimized aperture patterns.

While defocus blur causes a loss in image detail, it also encodes depth information about the scene. A typical depth from defocus (DFD) technique computes depth from two or more images captured with circular-aperture lenses at different focus settings; circular apertures produce circular defocus PSFs. In this thesis, I show that the use of a circular aperture severely restricts the accuracy of DFD, and I propose a comprehensive framework of PSF evaluation for depth recovery. From this framework, I derive a criterion for evaluating a pair of apertures with respect to the precision of depth recovery. This criterion is optimized using a genetic algorithm followed by gradient descent search to arrive at a pair of high-resolution apertures. The two coded apertures are found to complement each other in the scene frequencies they preserve. This property makes it possible not only to recover depth with greater fidelity, but also to obtain a high-quality all-focused image from the two defocused images.
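To make the evaluation idea concrete, the following is a minimal sketch of one way such a frequency-domain criterion can be computed, assuming Wiener deconvolution and a 1/f^2 natural-image power-spectrum prior; the exact criterion and optimization procedure used in the thesis may differ, and the function `expected_deblur_error` and its parameters are illustrative.

```python
import numpy as np

def expected_deblur_error(psf, sigma, size=64):
    """Score a blur kernel (e.g. the defocus PSF induced by an aperture
    pattern) by the expected Wiener-deconvolution error under additive
    Gaussian noise of level sigma and a 1/f^2 image-spectrum prior.
    Lower scores mean more image detail survives deblurring."""
    # Optical transfer function of the PSF on a size x size frequency grid.
    K = np.fft.fft2(psf, (size, size))

    # Assumed natural-image power-spectrum prior: A(f) ~ 1 / |f|^2.
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size),
                         indexing="ij")
    A = 1.0 / np.maximum(fx**2 + fy**2, 1.0 / size**2)

    # Expected residual error of the Wiener estimate, summed over frequencies:
    # sigma^2 * A / (|K|^2 * A + sigma^2).
    return float(np.sum(sigma**2 * A / (np.abs(K)**2 * A + sigma**2)))

# Illustrative comparison: a conventional circular aperture versus a randomly
# coded one, both normalized so the PSF conserves light.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[-5:6, -5:6]
    circular = (xx**2 + yy**2 <= 25).astype(float)
    coded = circular * rng.integers(0, 2, size=circular.shape)
    for name, aperture in (("circular", circular), ("coded", coded)):
        psf = aperture / aperture.sum()
        print(name, expected_deblur_error(psf, sigma=0.005))
```

A criterion of this form can be plugged into a search over binary aperture patterns (for instance, a genetic algorithm followed by local refinement, as the thesis describes) to find patterns whose defocus PSFs are easier to invert.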
While depth recovery can significantly benefit from optimized aperture patterns, its overall performance is fundamentally limited by the physical size of the lens aperture. To transcend this limitation, I propose a novel depth recovery technique, referred to as depth from diffusion (DFDiff), that uses an optical diffuser. I show that DFDiff is analogous to conventional DFD, with the scatter angle of the diffuser determining the system's effective aperture. High-precision depth estimation can then be achieved by choosing a proper diffuser, without the large lenses that DFD requires: even a consumer camera with a low-end small lens can perform high-precision depth estimation when coupled with an optical diffuser. In a detailed analysis of the image formation properties of a DFDiff system, I show a number of examples demonstrating greater precision in depth estimation when using DFDiff.

While the finite depth of field (DOF) of a lens camera leads to defocus blur, it also produces an artistic visual experience. Many of today's displays are interactive in nature, which opens up the possibility of new kinds of visual representations. Users could, for example, interactively refocus images to different depths, experiencing artistic narrow-DOF images while still having access to image detail across the entire scene. One typical approach to enable refocusing is to capture the entire light field, but this method significantly sacrifices spatial resolution due to a dimensionality gap: the captured information (the light field) is 4D, while the required information (a focal stack) is only 3D. In this thesis, I present an imaging system that directly captures focal stacks by sweeping the focal plane. First, I describe how to synchronize focus sweeping with image capture so that the summed DOF of a focal stack efficiently covers the entire depth range. Then, I develop a customized algorithm to enable a seamless refocusing experience, even in textureless regions or with moving objects. Prototype cameras are presented that capture real space-time focal stacks, and an interactive refocusing viewer is available online at www.focalsweep.com.
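As a rough illustration of how an interactive viewer can be driven by a captured focal stack, the sketch below computes a per-pixel sharpest-slice index and returns the slice in focus at a clicked pixel. It assumes a simple Laplacian focus measure, and the names `sharpness_index_map` and `refocus_at` are hypothetical; the customized algorithm in the thesis, which handles textureless regions and moving objects seamlessly, is more involved.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def sharpness_index_map(stack):
    """stack: float array of shape (N, H, W), one image per focus setting.
    Returns, per pixel, the index of the slice in which it appears sharpest."""
    # Focus measure: locally averaged absolute Laplacian response per slice.
    # The smoothing window lets textureless pixels borrow evidence from
    # nearby textured neighbors.
    responses = np.stack([uniform_filter(np.abs(laplace(img)), size=15)
                          for img in stack])
    return np.argmax(responses, axis=0)

def refocus_at(stack, index_map, y, x):
    """Return the focal-stack slice that is in focus at the clicked pixel."""
    return stack[index_map[y, x]]
```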

Identifier: oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/D8PR836S
Date: January 2013
Creators: Zhou, Changyin
Source Sets: Columbia University
Language: English
Detected Language: English
Type: Theses