121 |
Camera-independent learning and image quality assessment for super-resolution. Bégin, Isabelle. January 2007
An increasing number of applications require high-resolution images in situations where access to the sensor and knowledge of its specifications are limited. This thesis addresses the problem of blind super-resolution, defined here as the estimation of a high-resolution image from one or more low-resolution inputs when the parameters of the degradation model are unknown. The assessment of super-resolved results, using objective measures of image quality, is also addressed.

Learning-based methods have been applied successfully to the single-frame super-resolution problem in the past; however, sensor characteristics such as the Point Spread Function (PSF) must often be known. In this thesis, a learning-based approach is adapted to work without knowledge of the PSF, making the framework camera-independent. The goal is not only to super-resolve an image under this limitation, but also to provide an estimate of the best PSF, described by a theoretical model with one unknown parameter.

In particular, two extensions of a method performing belief propagation on a Markov Random Field are presented. The first finds the best PSF parameter by searching for the minimum mean distance between training examples and patches from the input image. The second finds the best PSF parameter and the super-resolution result simultaneously, given a range of candidate PSF parameters from which the algorithm chooses. For both methods, a first estimate is obtained through blind deconvolution, and an uncertainty is calculated in order to restrict the search.

Both camera-independent adaptations are compared and analyzed in various experiments, and key parameters are varied to determine their effect on both the super-resolution and the PSF parameter recovery results. The use of quality measures is thus essential to quantify the improvements obtained from the algorithms. A set of measures is chosen to represent different aspects of image quality: signal fidelity, perceptual quality, and the localization and scale of edges.

Results indicate that both methods improve similarity to the ground truth and can, in general, refine the initial PSF parameter estimate towards the true value. Furthermore, the similarity measures show that the chosen learning-based framework consistently improves a measure designed for perceptual quality.
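As an illustration of the first search strategy above (a minimal sketch, not the thesis implementation), a candidate PSF parameter can be scored by the mean distance between patches of the low-resolution input and patches of training images blurred and downsampled with that candidate PSF; the isotropic Gaussian PSF model, the patch size, and all function names below are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def extract_patches(img, size=5):
        """Collect all overlapping size x size patches of a grayscale image."""
        h, w = img.shape
        return np.array([img[i:i + size, j:j + size].ravel()
                         for i in range(h - size + 1)
                         for j in range(w - size + 1)])

    def psf_score(train_hr, lr_input, sigma, scale=2, size=5):
        """Mean distance from each input patch to its nearest simulated patch.

        The PSF is modelled as an isotropic Gaussian with one unknown
        parameter (sigma); each high-resolution training image is blurred
        and downsampled to simulate the unknown camera."""
        simulated = [gaussian_filter(hr, sigma)[::scale, ::scale] for hr in train_hr]
        train_patches = np.vstack([extract_patches(s, size) for s in simulated])
        input_patches = extract_patches(lr_input, size)
        dists = [np.min(np.linalg.norm(train_patches - p, axis=1))
                 for p in input_patches]
        return float(np.mean(dists))

    def estimate_psf_sigma(train_hr, lr_input, candidate_sigmas):
        """Pick the candidate sigma with the lowest mean patch distance."""
        scores = [psf_score(train_hr, lr_input, s) for s in candidate_sigmas]
        return candidate_sigmas[int(np.argmin(scores))]

In the thesis, the search over candidate parameters is further restricted by an initial blind-deconvolution estimate and its uncertainty; the exhaustive scan above is only the simplest variant.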
122 |
Representing junctions through asymmetric tensor diffusion. Arseneau, Shawn. January 2006
Gradient-based junctions form key features in applications such as object classification, motion segmentation, and image enhancement. Asymmetric junctions arise from the merging of an odd number of contour end-points, as at a 'Y' junction. Without an asymmetric representation of such a structure, it is placed in the same category as 'X' junctions. This has severe consequences when distinguishing between features in object classification, when discerning occlusion from disocclusion in motion segmentation, and when properly modeling smoothing boundaries in image enhancement.

Current junction analysis methods include convolution, which applies a mask over a sub-region of the image, and diffusion, which propagates gradient information from point to point according to a set of rules.

A novel method is proposed that yields an improved approximation of the underlying contours through the use of asymmetric junctions. The method combines the ability to represent asymmetric information, as a number of convolution methods do, with the robustness of local support obtained from diffusion schemes. This work investigates several design paradigms for the asymmetric tensor diffusion algorithm. The proposed approach proved superior to existing techniques by properly accounting for asymmetric junctions over a wide range of scenarios.
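A toy illustration of why an asymmetric representation matters (not the thesis algorithm): a standard structure tensor sums the outer products of arm directions and is blind to the sign of each direction, so a single contour end-point and a full line through the same orientation yield the same tensor (up to scale), whereas a histogram over the full circle keeps them apart. The bin count and example directions below are arbitrary choices.

    import numpy as np

    def structure_tensor(directions):
        """Sum of outer products d d^T; unchanged if any direction is flipped."""
        T = np.zeros((2, 2))
        for theta in directions:
            d = np.array([np.cos(theta), np.sin(theta)])
            T += np.outer(d, d)
        return T

    def direction_histogram(directions, bins=8):
        """Histogram over the full circle; keeps the sign of each direction."""
        h, _ = np.histogram(np.mod(directions, 2 * np.pi),
                            bins=bins, range=(0, 2 * np.pi))
        return h

    one_arm  = np.array([0.0])            # a single contour end-point pointing right
    two_arms = np.array([0.0, np.pi])     # a full line: arms pointing right and left

    print(structure_tensor(one_arm))      # [[1, 0], [0, 0]]
    print(structure_tensor(two_arms))     # [[2, 0], [0, 0]]  (same shape, just scaled)
    print(direction_histogram(one_arm))   # one occupied bin
    print(direction_histogram(two_arms))  # two occupied bins, pi apart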
123 |
Practical visual odometry for small embedded systems. Schaerer, Shawn S. 19 September 2006
Localization and mapping are important abilities for any robot that is to navigate intelligently in the real world. The goal of the research presented in this thesis was to develop a practical embedded visual odometer that exploits common features found in real-world environments. The visual odometer is a system that measures the self-motion of a mobile robot using visual feedback. It was tested on a custom mobile robot in several tests derived from the robotic soccer domain, and its performance was compared to two other systems: a robot using a KLT feature tracker and a robot using commercial shaft encoders. The results showed that the developed visual odometer performed below expectations but has good potential. The tests also exposed the limitations of the KLT feature tracker approach and showed that the shaft-encoder-based robot likewise performed below expectations.
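For context on the KLT-based comparison system (a generic sketch, not the implementation evaluated in the thesis), planar self-motion between two frames can be estimated by tracking corners with a pyramidal Lucas-Kanade tracker and robustly fitting a similarity transform. The planar-scene assumption, feature counts, and thresholds below are assumptions.

    import cv2
    import numpy as np

    def estimate_planar_motion(prev_gray, next_gray):
        """Estimate (dx, dy, dtheta) of the camera between two grayscale frames.

        Assumes an approximately planar scene (for example a camera looking
        at the floor), so image motion is close to a 2D similarity transform."""
        pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                           qualityLevel=0.01, minDistance=7)
        if pts_prev is None:
            return None
        pts_next, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                          pts_prev, None)
        good_prev = pts_prev[status.ravel() == 1]
        good_next = pts_next[status.ravel() == 1]
        if len(good_prev) < 10:
            return None
        # Robustly fit translation + rotation (+ scale) between the point sets.
        M, _inliers = cv2.estimateAffinePartial2D(good_prev, good_next,
                                                  method=cv2.RANSAC)
        if M is None:
            return None
        dx, dy = M[0, 2], M[1, 2]
        dtheta = np.arctan2(M[1, 0], M[0, 0])
        return dx, dy, dtheta

Integrating such per-frame increments gives an odometry estimate whose error grows over time, which is one reason visual odometers are typically compared against, or fused with, wheel encoders.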
124 |
Spiral Architecture for Machine Vision. January 1996
This thesis presents a new and powerful approach to the development of a general-purpose machine vision system. The approach is inspired by anatomical considerations of the primate vision system: the geometrical arrangement of cones on a primate's retina can be described in terms of a hexagonal grid. The importance of the hexagonal grid is that it possesses special computational features pertinent to the vision process. The fundamental thrust of this thesis emanates from the observation that this hexagonal grid can be described in terms of the mathematical object known as a Euclidean ring. The Euclidean ring is employed to generate an algebra of linear transformations appropriate for the processing of multidimensional vision data. A parallel, autonomous segmentation algorithm for multidimensional vision data is described. The algebra and the segmentation algorithm are implemented on a network of transputers, and the implementation is discussed in the context of an outline design for a general-purpose machine vision system.
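For readers unfamiliar with hexagonal grids, the sketch below shows a standard axial-coordinate representation in which every cell has exactly six equidistant neighbours; it illustrates only the grid itself, not the Spiral Architecture's addressing scheme or the Euclidean-ring algebra developed in the thesis.

    from typing import List, Tuple

    # Axial coordinates (q, r) for a hexagonal grid: each cell has six
    # neighbours, unlike the 4- or 8-neighbourhoods of a square grid.
    AXIAL_DIRECTIONS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

    def hex_neighbours(q: int, r: int) -> List[Tuple[int, int]]:
        """The six cells adjacent to (q, r)."""
        return [(q + dq, r + dr) for dq, dr in AXIAL_DIRECTIONS]

    def hex_distance(a: Tuple[int, int], b: Tuple[int, int]) -> int:
        """Grid distance between two hexagonal cells in axial coordinates."""
        dq, dr = a[0] - b[0], a[1] - b[1]
        return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

    print(hex_neighbours(0, 0))           # the six immediate neighbours of the origin
    print(hex_distance((0, 0), (2, -1)))  # 2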
125 |
A logical formulation of the 3D reconstruction problem using a volumetric framework. Robinson, M. J. Unknown Date
No description available.
126 |
A super fast scanning technique for phased array weather radar applications. Lai, H. K. Unknown Date
No description available.
127 |
Efficient recursive factorization methods for determining structure from motion. Li, Yanhua. January 2000
Bibliography: leaves 100-110. / xiv, 110 leaves : ill. ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / This thesis addresses the structure from motion problem in computer vision. / Thesis (Ph.D.)--University of Adelaide, Dept. of Computer Science, 2000
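For context on what a factorization method does in structure from motion (a sketch of the classic batch Tomasi-Kanade factorization under an orthographic camera model, not the recursive methods developed in this thesis):

    import numpy as np

    def tomasi_kanade(W):
        """Batch factorization of a 2F x P measurement matrix W.

        Rows 0..F-1 hold the x image coordinates of P points tracked over
        F frames, rows F..2F-1 the y coordinates (orthographic camera).
        Returns (M, S): a 2F x 3 motion matrix and a 3 x P shape matrix,
        up to a 3 x 3 affine ambiguity (the metric upgrade is omitted)."""
        W_reg = W - W.mean(axis=1, keepdims=True)   # register: subtract per-row centroids
        U, s, Vt = np.linalg.svd(W_reg, full_matrices=False)
        M = U[:, :3] * np.sqrt(s[:3])               # camera (motion) factor
        S = np.sqrt(s[:3])[:, None] * Vt[:3, :]     # shape (structure) factor
        return M, S

Recursive variants, broadly speaking, update such a factorization as new frames arrive rather than re-solving the batch problem, which is what makes them attractive for efficiency.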
129 |
Cooperative windowing for real-time visual tracking. Nassif, Samer Chaker. January 1997
Thesis (Ph.D.)--McMaster University, 1997. / Includes bibliographical references (leaves 99-104). Also available via World Wide Web.
130 |
Combining silhouette and shading cues for model reconstruction. Li, Shuda. January 2007
Thesis (M. Phil.)--University of Hong Kong, 2008. / Also available in print.