741 |
A No-reference Image Enhancement Quality Metric and Fusion Technique /Headlee, Jonathan Michael 27 May 2015 (has links)
No description available.
|
742 |
Body image as a function of social comparison, self-schema, and self-discrepancy /Jung, Jaehee January 1999 (has links)
No description available.
|
743 |
An experimental study of a binocular vision system for rough terrain locomotion of a hexapod walking robot /Tsai, Sheng-Jen January 1983 (has links)
No description available.
|
744 |
Plant identification using color co-occurrence matrices derived from digitized images /Shearer, Scott A. January 1987 (has links)
No description available.
|
745 |
An exploration of developmental relationships between children's body image boundaries, estimates of dimensions of body space, and performance of selected gross motor tasks /Woods, Marcella Darlene January 1967 (has links)
No description available.
|
746 |
An investigation of Boolean image neighborhood transformations /Miller, Peter Edwin January 1978 (has links)
No description available.
|
747 |
Image Analysis and Segmentation Based on the Circular Pipeline Video Processor /Albritton, Jon M. 01 January 1984 (has links) (PDF)
Visual inspection of printed circuit boards has generally depended on human inspectors. However, a system has been developed which allows for automated visual inspection using robotics and modern image processing techniques. This paper first introduces automatic visual inspection processes, overviews the Automatic Board Assembly, Inspection and Test (ABAIT) system, reviews image processing concepts and describes the Circular Pipeline Video Processor (CPVP). Image data from the CPVP is analyzed and an investigation into alternate segmentation algorithms to identify circuit board features is presented. The relative performance of these algorithms is compared and conclusions are drawn.
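As a rough illustration of the kind of intensity-based segmentation such a study compares, the sketch below thresholds a grayscale board image with Otsu's method and labels connected components. It is not the CPVP pipeline described in the thesis; the input file name and area cutoff are placeholders.

```python
# Illustrative only: a simple global-threshold segmentation of a PCB image,
# in the spirit of the intensity-based algorithms compared in such work.
import cv2

img = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
if img is None:
    raise FileNotFoundError("board.png not found")

# Otsu's method picks a global threshold separating bright features
# (pads, traces) from the darker substrate.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Label connected components so individual board features can be measured.
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

for i in range(1, num_labels):  # label 0 is the background
    x, y, w, h, area = stats[i]
    if area > 50:  # ignore specks; the size cutoff is arbitrary here
        print(f"feature {i}: bbox=({x},{y},{w},{h}), area={area}")
```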
|
748 |
Fast Screening Algorithm for Template Matching /Liu, Bolin January 2017
This paper presents a generic pre-processor for expediting conventional template matching techniques. Instead of locating the best matched patch in the reference image to a query template via exhaustive search, the proposed algorithm rules out regions with no possible matches with minimum computational efforts. While working on simple patch features, such as mean, variance and gradient, the fast pre-screening is highly discriminative. Its computational efficiency is gained by using a novel octagonal-star-shaped template and the inclusion-exclusion principle to extract and compare patch features. Moreover, it can handle arbitrary rotation and scaling of reference images effectively, and is also robust to uniform illumination changes. GPU-aided implementation shows great efficiency of parallel computing in the algorithm design, and extensive experiments demonstrate that the proposed algorithm greatly reduces the search space while never missing the best match. / Thesis / Master of Applied Science (MASc)
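As a rough sketch of the pre-screening idea summarized above, the code below computes the mean and variance of every candidate patch with summed-area tables (the inclusion-exclusion principle) and rejects locations whose statistics differ too much from the template's. It uses plain rectangular patches rather than the octagonal-star template of the thesis, does not handle rotation or scaling, and the tolerance values are arbitrary.

```python
# Minimal pre-screening sketch: cheap patch statistics via integral images,
# used to prune locations before running an exact template-matching score.
import numpy as np

def integral_tables(img):
    """Summed-area tables of the image and its square, zero-padded on top/left."""
    ii = np.pad(np.cumsum(np.cumsum(img, axis=0), axis=1), ((1, 0), (1, 0)))
    ii2 = np.pad(np.cumsum(np.cumsum(img ** 2, axis=0), axis=1), ((1, 0), (1, 0)))
    return ii, ii2

def patch_mean_var(ii, ii2, h, w):
    """Mean and variance of every h-by-w patch via inclusion-exclusion."""
    n = h * w
    s = ii[h:, w:] - ii[:-h, w:] - ii[h:, :-w] + ii[:-h, :-w]
    s2 = ii2[h:, w:] - ii2[:-h, w:] - ii2[h:, :-w] + ii2[:-h, :-w]
    mean = s / n
    var = s2 / n - mean ** 2
    return mean, var

def prescreen(reference, template, mean_tol=10.0, var_tol=200.0):
    """Boolean map of candidate top-left corners that survive the cheap test."""
    h, w = template.shape
    t_mean, t_var = template.mean(), template.var()
    ii, ii2 = integral_tables(reference.astype(np.float64))
    mean, var = patch_mean_var(ii, ii2, h, w)
    return (np.abs(mean - t_mean) < mean_tol) & (np.abs(var - t_var) < var_tol)

# Usage: run the expensive matcher (e.g. normalized cross-correlation) only
# where prescreen(...) is True, instead of over the whole reference image.
```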
|
749 |
Bad Weather Effect Removal in Images and Videos /Kan, Pengfei January 2018 (has links)
Commonly experienced bad weather conditions such as fog, snow and rain cause pixel intensity changes in images and videos taken in outdoor environments and impair the performance of algorithms in outdoor vision systems. Hence, the effects of bad weather conditions need to be removed to improve the performance of these systems.
This thesis focuses on the three most common weather conditions: fog, snow and rain. Their physical properties are first analyzed. Based on these properties, traditional methods are introduced individually to remove each weather condition's effect on images or videos. For fog removal, the scattering model is used to describe the foggy scene and to estimate the clear scene radiance from a single input image. Two scenarios are then discussed, one with videos and the other with single images. The removal of snow and rain in videos is easier than in single images: in videos, temporal and chromatic properties of snow and rain can be used to remove their impact, while in single images traditional methods with edge-preserving filters are discussed.
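For context, the sketch below inverts the standard atmospheric scattering model I = J*t + A*(1 - t) to recover the scene radiance J from a single foggy image. The dark-channel-style estimates of the atmospheric light A and transmission t are one common choice and are not necessarily the method used in the thesis; the patch size, omega and t0 values are illustrative.

```python
# Single-image defogging sketch based on the atmospheric scattering model.
import cv2
import numpy as np

def dehaze(I, patch=15, omega=0.95, t0=0.1):
    I = I.astype(np.float64) / 255.0
    # Dark channel: per-pixel minimum over color channels and a local window.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(I.min(axis=2), kernel)

    # Atmospheric light A: average color of the brightest dark-channel pixels.
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]
    A = I.reshape(-1, 3)[idx].mean(axis=0)

    # Transmission estimate from the normalized dark channel.
    t = 1.0 - omega * cv2.erode((I / A).min(axis=2), kernel)

    # Invert the scattering model I = J*t + A*(1 - t) to recover J.
    J = (I - A) / np.maximum(t, t0)[..., None] + A
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)

# foggy = cv2.imread("foggy.png"); clear = dehaze(foggy)  # hypothetical usage
```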
However, traditional methods based on the physical properties of bad weather have several limitations. Each of them can only deal with one specific weather condition at a time, and in real application scenarios it is difficult for a vision system to recognize the weather condition and choose the corresponding removal method. Therefore, machine learning methods have advantages over traditional methods. In this thesis, a Generative Adversarial Network (GAN) is used to remove the effects of these weather conditions. The GAN performs image-to-image translation instead of analyzing the physical properties of each weather condition, and it achieves impressive results across different weather conditions. / Thesis / Master of Applied Science (MASc)
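As an illustration of GAN-based image-to-image translation for this kind of task, the sketch below pairs a small generator with a patch discriminator and trains with an adversarial plus L1 loss, in the style of pix2pix. The tiny network sizes, loss weight and assumption of paired degraded/clean training images are placeholders, not the configuration used in the thesis.

```python
# Compact conditional-GAN sketch for weather-degraded -> clean image translation.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(  # judges (degraded, restored) pairs patch-wise
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, degraded, restored):
        return self.net(torch.cat([degraded, restored], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(degraded, clean, lambda_l1=100.0):
    # Discriminator: real pairs should score 1, generated pairs 0.
    fake = G(degraded)
    d_real = D(degraded, clean)
    d_fake = D(degraded, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator and stay close to the clean target.
    d_fake = D(degraded, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, clean)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# train_step(degraded_batch, clean_batch) would be called over a dataset of
# paired weather-degraded and clean images normalized to [-1, 1].
```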
|
750 |
IMAGE RESTORATIONS USING DEEP LEARNING TECHNIQUES /Chi, Zhixiang January 2018
Conventional methods for solving image restoration problems are typically built on an image degradation model and on some priors of the latent image. The model of the degraded image and the prior knowledge of the latent image are necessary because restoration is an ill-posed inverse problem. However, for some applications, such as those addressed in this thesis, the image degradation process is too complex to model precisely; in addition, mathematical priors, such as low rank and sparsity of the image signal, are often too idealistic for real-world images. These difficulties limit the performance of existing image restoration algorithms, but they can be, to a certain extent, overcome by the techniques of machine learning, particularly deep convolutional neural networks. Machine learning allows large-sample statistics far beyond what is available in a single input image to be exploited. More importantly, big data can be used to train deep neural networks to learn the complex non-linear mapping between the degraded and original images. This circumvents the difficulty of building an explicit, realistic mathematical model when the degradation causes are complex and compounded.
In this thesis, we design and implement deep convolutional neural networks (DCNN) for two challenging image restoration problems: reflection removal and joint demosaicking-deblurring. The first problem is one of blind source separation; its DCNN solution requires a large set of paired clean and mixed images for training. As these paired training images are very difficult, if not impossible, to acquire in the real world, we develop a novel technique to synthesize the required training images that satisfactorily approximate the real ones. For the joint demosaicking-deblurring problem, we propose a new multiscale DCNN architecture consisting of a cascade of subnetworks so that the underlying blind deconvolution task can be broken into smaller subproblems and solved more effectively and robustly. In both cases extensive experiments are carried out. Experimental results demonstrate clear advantages of the proposed DCNN methods over existing ones. / Thesis / Master of Applied Science (MASc)
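As a rough sketch of the cascaded multiscale idea, the code below builds an image pyramid and lets each subnetwork refine the upsampled estimate from the coarser scale. The layer counts, channel widths and number of scales are placeholders and do not reproduce the architecture proposed in the thesis.

```python
# Illustrative coarse-to-fine cascade of restoration subnetworks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubNet(nn.Module):
    """One stage of the cascade: refines the degraded input given the coarser estimate."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, degraded, coarse_estimate):
        return self.body(torch.cat([degraded, coarse_estimate], dim=1))

class MultiScaleCascade(nn.Module):
    def __init__(self, num_scales=3):
        super().__init__()
        self.stages = nn.ModuleList(SubNet() for _ in range(num_scales))
        self.num_scales = num_scales

    def forward(self, degraded):
        # Build a pyramid of the degraded input, coarsest first.
        pyramid = [degraded]
        for _ in range(self.num_scales - 1):
            pyramid.append(F.avg_pool2d(pyramid[-1], 2))
        pyramid = pyramid[::-1]

        estimate = pyramid[0]  # start from the coarsest degraded image itself
        outputs = []
        for level, stage in zip(pyramid, self.stages):
            if estimate.shape[-2:] != level.shape[-2:]:
                estimate = F.interpolate(estimate, size=level.shape[-2:],
                                         mode="bilinear", align_corners=False)
            estimate = stage(level, estimate)
            outputs.append(estimate)
        return outputs  # per-scale restorations; the last is the full-resolution result

# net = MultiScaleCascade()
# restored = net(torch.randn(1, 3, 128, 128))[-1]  # placeholder input tensor
```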
|