
Extending Depth of Field via Multifocus Fusion

In digital imaging systems, the optics constrain the depth of field within the field of view: parts of the scene are in focus while others are defocused. Here, a framework of versatile, data-driven, application-independent methods for extending the depth of field of digital imaging systems is presented. The principal contributions of this effort are the use of focal connectivity, the direct use of curvelets, and the use of features extracted by Empirical Mode Decomposition, namely Intrinsic Mode Images, for multifocus fusion. Depending on the approach, the input images are decomposed into focally connected components, peripheral and medial coefficients, or intrinsic mode images, and fusion is performed on the extracted focal information by schemes that emphasize the focused regions of each input image. The fused image unifies information from all focal planes while maintaining the verisimilitude of the scene. The final output is an image in which all focal volumes of the scene are in focus, as if acquired by a pinhole camera, whose infinitesimal aperture yields an effectively infinite depth of field. To validate the fusion performance of our methods, we compare our results with those of region-based and multiscale-decomposition-based fusion techniques. Several illustrative examples, supported by in-depth objective comparisons, are shown, and various practical recommendations are made.
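
The sketch below is not the dissertation's focal-connectivity, curvelet, or EMD-based method; it is only a minimal illustration of the underlying multifocus-fusion idea, per pixel, keep the value from whichever input image is locally sharpest. It assumes registered, same-sized grayscale inputs, and the focus measure (a smoothed absolute Laplacian), the function name fuse_multifocus, and the sigma parameter are all illustrative choices, not anything specified in the abstract.

```python
import numpy as np
from scipy import ndimage


def fuse_multifocus(images, sigma=3.0):
    """Naive multifocus fusion of registered grayscale images (illustrative only).

    images : list of 2-D float arrays of identical shape, one per focus setting.
    sigma  : Gaussian smoothing of the focus measure, to avoid speckled decisions.
    """
    stack = np.stack([np.asarray(img, dtype=float) for img in images], axis=0)

    # Focus measure: absolute Laplacian response, locally averaged.
    focus = np.stack(
        [ndimage.gaussian_filter(np.abs(ndimage.laplace(img)), sigma) for img in stack],
        axis=0,
    )

    # For every pixel, select the input image with the strongest focus response.
    winner = np.argmax(focus, axis=0)
    rows, cols = np.indices(winner.shape)
    return stack[winner, rows, cols]
```

A per-pixel hard selection like this tends to leave seams at focus boundaries; the decomposition-based schemes described in the abstract fuse in transform or feature domains precisely to avoid such artifacts.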

Identifier: oai:union.ndltd.org:UTENN/oai:trace.tennessee.edu:utk_graddiss-2314
Date: 01 December 2011
Creators: Hariharan, Harishwaran
Publisher: Trace: Tennessee Research and Creative Exchange
Source Sets: University of Tennessee Libraries
Detected Language: English
Type: text
Format: application/pdf
Source: Doctoral Dissertations
