The goal of super-resolution is to increase not only the size of an image but also its apparent resolution, making the result more plausible to human viewers. Many super-resolution methods perform well at modest magnification factors, but even the best suffer from boundary and gradient artifacts at high magnification factors. This thesis presents Bayesian edge inference (BEI), a novel method grounded in Bayesian inference that does not suffer from these artifacts and remains competitive in published objective quality measures. BEI works by explicitly modeling the image capture process, including any downsampling, and a fictional recapture process, which together allow principled control over blur. Scene modeling requires noncausal modeling within a causal framework, and an intuitive technique for doing so is given. Finally, BEI with trivial changes is shown to perform well on two tasks outside its original domain, CCD demosaicing and inpainting, suggesting that the model generalizes well.
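To make the abstract's notion of "modeling the image capture process explicitly" concrete, the following Python sketch shows one way such a forward model is commonly written: the scene is blurred by a point-spread function and then downsampled, and a candidate high-resolution scene is scored by how well its simulated capture matches the observed low-resolution image. This is an illustration only, not the model developed in the thesis; the Gaussian PSF, the decimation factor, the noise level, and the function names (`capture`, `recapture`, `log_likelihood`) are all assumptions made here for the example.

```python
# Minimal sketch of an explicit capture/recapture forward model (assumptions
# only; not the thesis's actual BEI model).
import numpy as np
from scipy.ndimage import gaussian_filter

def capture(scene, psf_sigma=1.0, factor=4):
    """Simulate capture: blur the scene with a Gaussian PSF, then decimate."""
    blurred = gaussian_filter(scene, sigma=psf_sigma)
    return blurred[::factor, ::factor]

def recapture(estimate, psf_sigma=0.5):
    """Fictional recapture at the target resolution with a narrower PSF,
    which is one way to exert explicit control over output blur."""
    return gaussian_filter(estimate, sigma=psf_sigma)

def log_likelihood(candidate_scene, observed, noise_sigma=0.01):
    """Gaussian likelihood: how well the candidate's simulated capture
    explains the observed low-resolution image."""
    residual = capture(candidate_scene) - observed
    return -0.5 * np.sum((residual / noise_sigma) ** 2)

# Usage: generate a synthetic observation, then score candidate scenes.
rng = np.random.default_rng(0)
true_scene = rng.random((64, 64))
observed = capture(true_scene) + 0.01 * rng.standard_normal((16, 16))
print(log_likelihood(true_scene, observed))
```

In a Bayesian treatment such as BEI's, a likelihood of this general form would be combined with a prior over scenes, and the fictional recapture step determines how the inferred scene is rendered at the target resolution.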
Identifier | oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-2838
Date | 11 March 2009
Creators | Toronto, Neil B.
Publisher | BYU ScholarsArchive
Source Sets | Brigham Young University
Detected Language | English
Type | text
Format | application/pdf
Source | Theses and Dissertations
Rights | http://lib.byu.edu/about/copyright/