741 |
An experimental study of a binocular vision system for rough terrain locomotion of a hexapod walking robot /Tsai, Sheng-Jen January 1983 (has links)
No description available.
|
742 |
Plant identification using color co-occurrence matrices derived from digitized images /Shearer, Scott A. January 1987 (has links)
No description available.
|
743 |
An exploration of developmental relationships between children's body image boundaries, estimates of dimensions of body space, and performance of selected gross motor tasks /Woods, Marcella Darlene January 1967 (has links)
No description available.
|
744 |
An investigation of Boolean image neighborhood transformations /Miller, Peter Edwin January 1978 (has links)
No description available.
|
745 |
Image Analysis and Segmentation Based on the Circular Pipeline Video Processor /Albritton, Jon M. 01 January 1984 (has links) (PDF)
Visual inspection of printed circuit boards has generally depended on human inspectors. However, a system has been developed which allows for automated visual inspection using robotics and modern image processing techniques. This paper first introduces automatic visual inspection processes, overviews the Automatic Board Assembly, Inspection and Test (ABAIT) system, reviews image processing concepts and describes the Circular Pipeline Video Processor (CPVP). Image data from the CPVP is analyzed, and an investigation into alternative segmentation algorithms for identifying circuit board features is presented. The relative performance of these algorithms is compared and conclusions are drawn.
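The abstract does not specify which segmentation algorithms were investigated; as an illustration only, here is a minimal sketch of one classic candidate for separating bright board features from a dark substrate, Otsu's global thresholding (the toy image and its intensity values are assumptions, not data from the thesis):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the intensity threshold maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    mu_total = np.dot(np.arange(256), hist)  # total intensity mass
    best_t, best_var = 0, 0.0
    cum_w = 0.0   # cumulative pixel count of the background class
    cum_mu = 0.0  # cumulative intensity mass of the background class
    for t in range(256):
        cum_w += hist[t]
        cum_mu += t * hist[t]
        if cum_w == 0 or cum_w == total:
            continue
        mu_b = cum_mu / cum_w                         # background mean
        mu_f = (mu_total - cum_mu) / (total - cum_w)  # foreground mean
        var_between = cum_w * (total - cum_w) * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A toy bimodal "board image": dark substrate (~40) with a bright trace region (~200).
img = np.full((8, 8), 40, dtype=np.uint8)
img[2:6, 2:6] = 200
t = otsu_threshold(img)
mask = img > t  # segmented bright-feature region
```

On a strongly bimodal histogram like this one, the threshold lands between the two modes and the mask isolates the bright block.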
|
746 |
Fast Screening Algorithm for Template Matching /Liu, Bolin January 2017 (has links)
This paper presents a generic pre-processor for expediting
conventional template matching techniques. Instead of locating the
best matched patch in the reference image to a query template via
exhaustive search, the proposed algorithm rules out regions with no
possible matches with minimal computational effort. Although it works
only on simple patch features, such as mean, variance and gradient, the
fast pre-screening is highly discriminative. Its computational
efficiency is gained by using a novel octagonal-star-shaped template
and the inclusion-exclusion principle to extract and compare patch
features. Moreover, it can handle arbitrary rotation and scaling of
reference images effectively, and is also robust to uniform
illumination changes. GPU-aided implementation shows great efficiency
of parallel computing in the algorithm design, and extensive
experiments demonstrate that the proposed algorithm greatly reduces
the search space while never missing the best match. / Thesis / Master of Applied Science (MASc)
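As a hedged illustration of the screening idea described above (not the thesis's actual algorithm, which uses an octagonal-star-shaped template), here is a minimal sketch of ruling out candidate patches by mean, where a summed-area table and the inclusion-exclusion principle give every window sum in O(1); the tolerance value is an assumption:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border; window sums then follow
    from the inclusion-exclusion principle."""
    return np.pad(img.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))

def prescreen(ref, tmpl, tol=5.0):
    """Boolean map over top-left corners: True where the patch mean is
    within `tol` of the template mean. Everything else is ruled out
    without an exhaustive pixel-wise comparison."""
    h, w = tmpl.shape
    sat = integral_image(ref.astype(np.float64))
    # Inclusion-exclusion: sum of any h x w window in four lookups.
    sums = sat[h:, w:] - sat[:-h, w:] - sat[h:, :-w] + sat[:-h, :-w]
    means = sums / (h * w)
    return np.abs(means - tmpl.mean()) <= tol

# Toy example: a bright 3x3 block hidden in a dark reference image.
ref = np.zeros((10, 10))
ref[4:7, 5:8] = 100.0
tmpl = np.full((3, 3), 100.0)
cand = prescreen(ref, tmpl)  # only the true location survives screening
```

An exhaustive matcher then needs to visit only the surviving candidates, which is where the speed-up comes from.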
|
747 |
Bad Weather Effect Removal in Images and Videos /Kan, Pengfei January 2018 (has links)
Commonly experienced bad weather conditions such as fog, snow and rain alter pixel intensities in images and videos taken in outdoor environments and impair the performance of algorithms in outdoor vision systems. Hence, the impact of bad weather conditions needs to be removed to improve the performance of outdoor vision systems.
This thesis focuses on the three most common weather conditions: fog, snow and rain. Their physical properties are first analyzed. Based on these properties, traditional methods are introduced individually to remove each condition's effect on images or videos. For fog removal, the scattering model is used to describe the fog scene and to estimate the clear scene radiance from a single input image. For snow and rain, two scenarios are discussed: videos and single images. Removal in videos is easier than in single images, because the temporal and chromatic properties of snow and rain can be exploited; for single images, traditional methods based on edge-preserving filters are discussed.
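The scattering-model inversion mentioned above can be sketched as follows; the airlight A and transmission map t are assumed known here, whereas the thesis estimates them from the single input image:

```python
import numpy as np

def dehaze(I, A, t, t0=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)
    to estimate the clear scene radiance J. The transmission is clamped
    at t0 to avoid amplifying noise where the haze is dense."""
    t = np.maximum(t, t0)
    return (I - A) / t + A

# Synthetic round trip: haze a known scene, then invert the model.
J_true = np.array([[0.2, 0.8],
                   [0.5, 0.9]])          # clear scene radiance
A, t = 1.0, np.full((2, 2), 0.6)         # airlight and transmission (assumed)
I = J_true * t + A * (1 - t)             # forward scattering model
J_est = dehaze(I, A, t)                  # recovers J_true exactly here
```

With exact A and t the inversion is lossless; in practice the estimation of those two quantities is the hard part of single image defogging.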
However, traditional methods based on the physical properties of bad weather conditions have multiple limitations. Each of them can only deal with one specific weather condition at a time, and in real application scenarios it is difficult for vision systems to recognize different weather conditions and choose the corresponding method to remove them. Machine learning methods therefore have advantages over traditional methods. In this thesis, a Generative Adversarial Network (GAN) is used to remove the effect of these weather conditions. The GAN performs image-to-image translation instead of analyzing the physical properties of different weather conditions, and it achieves impressive results across different weather conditions. / Thesis / Master of Applied Science (MASc)
|
748 |
IMAGE RESTORATIONS USING DEEP LEARNING TECHNIQUES /Chi, Zhixiang January 2018 (has links)
Conventional methods for solving image restoration problems are typically built on an image degradation model and on some priors of the latent image. The model of the degraded image and the prior knowledge of the latent image are necessary because restoration is an ill-posed inverse problem. However, for some applications, such as those addressed in this thesis, the image degradation process is too complex to model precisely; in addition, mathematical priors, such as low rank and sparsity of the image signal, are often too idealistic for real-world images. These difficulties limit the performance of existing image restoration algorithms, but they can be, to a certain extent, overcome by the techniques of machine learning, particularly deep convolutional neural networks. Machine learning allows large-sample statistics far beyond what is available in a single input image to be exploited. More importantly, big data can be used to train deep neural networks to learn the complex non-linear mapping between the degraded and original images. This circumvents the difficulty of building an explicit realistic mathematical model when the degradation causes are complex and compounded.
In this thesis, we design and implement deep convolutional neural networks (DCNN) for two challenging image restoration problems: reflection removal and joint demosaicking-deblurring. The first problem is one of blind source separation; its DCNN solution requires a large set of paired clean and mixed images for training. As these paired training images are very difficult, if not impossible, to acquire in the real world, we develop a novel technique to synthesize the required training images that satisfactorily approximate the real ones. For the joint demosaicking-deblurring problem, we propose a new multiscale DCNN architecture consisting of a cascade of subnetworks so that the underlying blind deconvolution task can be broken into smaller subproblems and solved more effectively and robustly. In both cases extensive experiments are carried out. Experimental results demonstrate clear advantages of the proposed DCNN methods over existing ones. / Thesis / Master of Applied Science (MASc)
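The idea of synthesizing paired training data for reflection removal can be illustrated with a minimal sketch; the linear mixing weight and the box-blurred reflection layer are simplifying assumptions for illustration, not the thesis's actual synthesis technique:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box blur; a stand-in for the defocus/ghosting that a real
    reflection layer typically exhibits."""
    kernel = np.ones((k, k)) / (k * k)
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (p[i:i + k, j:j + k] * kernel).sum()
    return out

def synthesize_pair(background, reflection, alpha=0.8):
    """Mix a clean background with a blurred reflection layer to produce
    a (mixed, clean) training pair for a reflection-removal network."""
    mixed = alpha * background + (1 - alpha) * box_blur(reflection)
    return mixed, background

# Constant toy layers make the expected mixture easy to verify by hand.
B = np.full((4, 4), 0.5)   # hypothetical clean background
R = np.ones((4, 4))        # hypothetical reflection layer
mixed, clean = synthesize_pair(B, R)
```

A network trained on many such (mixed, clean) pairs learns to map the superposition back to the background layer, which is the blind source separation task named above.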
|
749 |
Generic Model-Agnostic Convolutional Neural Networks for Single Image Dehazing /Liu, Zheng January 2018 (has links)
Haze and smog are among the most common environmental factors impacting image quality and, therefore, image analysis. In this paper, I propose an end-to-end generative method for the single image dehazing problem. It is based on a fully convolutional network with effective network structures that recognize the haze structure in input images and restore clear, haze-free ones. The proposed method is agnostic in the sense that it does not rely on the atmosphere scattering model; instead, it exploits the strength of convolutional networks in feature extraction and transfer. Somewhat surprisingly, it achieves superior performance relative to all existing state-of-the-art methods for image dehazing, even on SOTS outdoor images, which are synthesized using the atmosphere scattering model. To address its weakness on indoor hazy images and enhance the visual quality of the dehazed images, a lightweight parallel network is put forward. It employs a different convolution strategy that extracts features with a larger receptive field to generate a complementary image. With the help of this parallel stream, the fusion of the two outputs outperforms other methods in PSNR and SSIM. / Thesis / Master of Applied Science (MASc)
|
750 |
GridDehazeNet: Attention-Based Multi-Scale Network for Image Dehazing /Ma, Yongrui January 2019 (has links)
We propose an end-to-end trainable Convolutional Neural Network (CNN), named GridDehazeNet, for single image dehazing. The GridDehazeNet consists of three modules: pre-processing, backbone, and post-processing. The trainable pre-processing module can generate learned inputs with better diversity and more pertinent features than the derived inputs produced by hand-selected pre-processing methods. The backbone module implements a novel attention-based multi-scale estimation on a grid network, which can effectively alleviate the bottleneck issue often encountered in the conventional multi-scale approach. The post-processing module helps to reduce the artifacts in the final output. Experimental results indicate that the GridDehazeNet outperforms the state-of-the-art on both synthetic and real-world images. The proposed dehazing method does not rely on the atmosphere scattering model, and we explain why it is not necessarily beneficial to take advantage of the dimension reduction offered by that model, even when only the dehazing results on synthetic images are considered. / Thesis / Master of Applied Science (MASc)
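The attention-based fusion underlying such a multi-scale design can be sketched minimally; here fixed logits stand in for the network's learned attention weights, and the small single-channel feature maps are assumptions for illustration:

```python
import numpy as np

def attention_fuse(features, logits):
    """Fuse same-resolution feature maps from different scales using a
    softmax over the scale dimension, so the network can learn how much
    each scale should contribute at fusion points of the grid."""
    w = np.exp(logits - logits.max())   # numerically stable softmax
    w = w / w.sum()
    return sum(wi * f for wi, f in zip(w, features))

# Two same-resolution maps, e.g. one native and one upsampled from a
# coarser row of the grid.
f_fine = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
f_coarse = np.zeros((2, 2))
fused = attention_fuse([f_fine, f_coarse], np.array([np.log(3.0), 0.0]))
# softmax([log 3, 0]) = [0.75, 0.25], so the fine map dominates the fusion
```

Making these weights trainable, rather than fixing them, is what lets the backbone route information around the bottleneck of a rigid coarse-to-fine cascade.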
|