1 |
A dedicated computer vision system for dimensional inspection of engineering components
Wang, Jaiwei January 1997 (has links)
No description available.
|
2 |
Context Dependent Thresholding and Filter Selection for Optical Character Recognition
Kieri, Andreas January 2012 (has links)
Thresholding algorithms and filters are of great importance when using OCR to extract information from text documents such as invoices. Invoice documents vary greatly, and since the performance of image processing methods applied to them varies accordingly, selecting appropriate methods is critical if a high recognition rate is to be obtained. This paper aims to determine whether a document recognition system that automatically selects optimal processing methods, based on the characteristics of the input images, yields a higher recognition rate than a manual choice. Such a recognition system, including a learning framework for selecting optimal thresholding algorithms and filters, was developed and evaluated. It was established that automatic selection ensures a high recognition rate on a set of arbitrary invoice images by adapting to each image and avoiding the methods that yield poor recognition rates.
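As an illustration of the kind of thresholding algorithm such a system chooses among, here is a minimal sketch of Otsu's method in Python (not the thesis's code; the histogram-based formulation is standard):

```python
def otsu_threshold(histogram):
    """Return the grey level that maximizes between-class variance (Otsu's method).

    `histogram` is a list where histogram[i] counts pixels with intensity i.
    """
    total = sum(histogram)
    sum_all = sum(i * h for i, h in enumerate(histogram))
    sum_bg = 0.0          # running sum of intensities in the background class
    weight_bg = 0         # running pixel count of the background class
    best_t, best_var = 0, -1.0
    for t in range(len(histogram)):
        weight_bg += histogram[t]
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break
        sum_bg += t * histogram[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (sum_all - sum_bg) / weight_fg
        # Between-class variance; maximizing it separates the two modes.
        var_between = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

A learning framework such as the one described would treat this as one candidate among several, scoring each on recognition-rate feedback.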
|
3 |
A Case Study of Parallel Bilateral Filtering on the GPU
Larsson, Jonas January 2015 (has links)
Smoothing and noise reduction of images is often an important first step in image processing applications. Simple smoothing algorithms like the Gaussian filter have the unfortunate side effect of blurring the image, which can obscure important information and negatively affect subsequent processing. The bilateral filter is a widely used non-linear smoothing algorithm that seeks to preserve edges and contours while removing noise. It comes at a heavy cost in computational speed, especially on larger images, since the algorithm does more work per pixel than simpler smoothing algorithms. In applications where timing is important, this may be enough to push developers toward a simpler filter, at the cost of quality. However, the time cost of the bilateral filter can be greatly reduced through parallelization, as the work for each pixel can in principle be done simultaneously. This work uses Nvidia's Compute Unified Device Architecture (CUDA) to implement and evaluate some of the most common and effective methods for parallelizing the bilateral filter on a graphics processing unit (GPU). These include use of the constant and shared memories, and a technique called 1 x N tiling. The techniques are evaluated on newer hardware, and the results are compared to a sequential version and to a naive parallel version that uses no advanced techniques. This report also aims to give a detailed and comprehensible explanation of these techniques, in the hope that readers may use the information put forth to implement them on their own. The greatest speedup is achieved in the initial parallelization step, where the algorithm is simply converted to run in parallel on a GPU. Storing some data in constant memory provides a slight but reliable speedup for a small amount of work. Additional time can be gained by using shared memory.
However, memory transactions did not account for as much of the execution time as expected, and therefore the memory optimizations yielded only small improvements. Test results showed 1 x N tiling to be largely non-beneficial on the hardware used in this work, though there may have been problems with the implementation.
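For reference, the filter being parallelized can be sketched sequentially as follows (a naive Python version, not the CUDA implementation evaluated in the report): each output pixel is a weighted mean of its neighbours, where the weight combines spatial closeness (sigma_s) and intensity similarity (sigma_r), so edges are preserved while flat regions are smoothed.

```python
import math

def bilateral_filter(img, radius, sigma_s, sigma_r):
    """Naive bilateral filter on a 2-D grayscale image given as a list of rows."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # Spatial weight: nearby pixels count more.
                        ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        # Range weight: similar intensities count more,
                        # which is what keeps edges sharp.
                        diff = img[ny][nx] - img[y][x]
                        wr = math.exp(-(diff * diff) / (2 * sigma_r ** 2))
                        acc += ws * wr * img[ny][nx]
                        norm += ws * wr
            out[y][x] = acc / norm
    return out
```

The per-pixel independence of the two outer loops is exactly what makes the GPU mapping (one thread per pixel) straightforward.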
|
4 |
Statistical analysis applied to data classification and image filtering
ALMEIDA, Marcos Antonio Martins de 21 December 2016 (has links)
Statistical analysis is a tool of wide applicability in several areas of scientific knowledge. This thesis makes use of statistical analysis in two different applications: data classification and image processing targeted at document image binarization. In the first case, this thesis presents an analysis of several aspects of the consistency of the classification of senior researchers in computer science by the Brazilian research council, CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico). The second application of statistical analysis developed in this thesis addresses filtering out the back-to-front interference which appears whenever a document is written or typed on both sides of translucent paper. In this topic, an assessment of the most important algorithms found in the literature is made, taking into account a large number of parameters such as the strength of the back-to-front interference, the diffusion of the ink in the paper, and the texture and hue of the paper due to aging. A new binarization algorithm is proposed, which is capable of removing the back-to-front noise in a wide range of documents. Additionally, this thesis proposes a new concept of "intelligent" binarization for complex documents, which besides text encompass several graphical elements such as figures, photos, and diagrams.
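The back-to-front filtering problem can be illustrated with a standard local thresholding scheme such as Sauvola's (a simplified sketch, not the algorithm proposed in the thesis; the window radius and the parameters k and R are illustrative choices): because bleed-through ink is lighter than true ink, a threshold adapted to the local mean and contrast can push it to the background.

```python
import math

def sauvola_binarize(img, radius=1, k=0.2, R=128.0):
    """Sauvola-style local binarization of a grayscale image (list of rows).

    Threshold per pixel: T = m * (1 + k * (s / R - 1)), where m and s are the
    neighbourhood mean and standard deviation. Returns 0 for ink, 255 for paper.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            m = sum(vals) / len(vals)
            s = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
            t = m * (1 + k * (s / R - 1))
            out[y][x] = 0 if img[y][x] <= t else 255
    return out
```

A global threshold would have to sit between the bleed-through level and the paper level everywhere at once, which is exactly what varies across aged documents; the local scheme sidesteps that.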
|
5 |
Image Filtering Methods for Biomedical Applications
Niazi, M. Khalid Khan January 2011 (has links)
Filtering is a key step in digital image processing and analysis. It is mainly used for amplification or attenuation of some frequencies, depending on the nature of the application. Filtering can be performed either in the spatial domain or in a transformed domain. The selection of the filtering method, the filtering domain, and the filter parameters is often driven by the properties of the underlying image. This thesis presents three different kinds of biomedical image filtering applications, in which the filter parameters are automatically determined from the underlying images. Filtering can be used for image enhancement. We present a robust image-dependent filtering method for intensity inhomogeneity correction of biomedical images. In the presented filtering method, the filter parameters are automatically determined from the grey-weighted distance transform of the magnitude spectrum. An evaluation shows that the filter provides an accurate estimate of intensity inhomogeneity. Filtering can also be used for analysis. The thesis presents a filtering method for heart localization and robust signal detection from video recordings of rat embryos. It presents a strategy to decouple motion artifacts produced by the non-rigid embryonic boundary from the heart. The method also filters out noise and the trend term with the help of empirical mode decomposition. Again, all the filter parameters are determined automatically based on the underlying signal. Transforming the geometry of one image to fit that of another, so-called image registration, can be seen as a filtering operation on the image geometry. To assess the progression of an eye disorder, registration between temporal images is often required to determine the movement and development of the blood vessels in the eye. We present a robust method for retinal image registration. The method is based on particle swarm optimization, where the swarm searches for optimal registration parameters based on the direction of its cognitive and social components. An evaluation shows that the proposed method is less susceptible to becoming trapped in local minima than previous methods. With these thesis contributions, we have augmented the filter toolbox for image analysis with methods that adjust to the data at hand.
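The intensity inhomogeneity correction described above can be illustrated with a much simpler stand-in: estimate the slowly varying bias field with a large mean filter and divide it out (a hedged sketch; the thesis instead derives the filter parameters from the grey-weighted distance transform of the magnitude spectrum).

```python
def correct_inhomogeneity(img, radius):
    """Toy bias-field correction for a grayscale image (list of rows).

    The local mean over a large window approximates the low-frequency
    bias; dividing by it (rescaled to preserve the global mean) flattens
    the slowly varying intensity drift.
    """
    h, w = len(img), len(img[0])
    mean_all = sum(map(sum, img)) / (h * w)
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            bias = sum(vals) / len(vals)   # local estimate of the bias field
            out[y][x] = img[y][x] * mean_all / bias
    return out
```

The window radius plays the role that the automatically determined filter parameters play in the thesis: too small and real structure is flattened, too large and the bias survives.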
|
6 |
An analytic approach to tensor scale with efficient computational solution and applications to medical imaging
Xu, Ziyue 01 May 2012 (has links)
Scale is a widely used notion in medical image analysis that evolved in the form of scale-space theory, where the key idea is to represent and analyze an image at various resolutions. Recently, a notion of local morphometric scale referred to as "tensor scale" was introduced using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In previous work, tensor scale was described using a 2-D algorithmic approach, and a precise analytic definition was missing. Also, with the previous framework, 3-D application was not practical due to computational complexity. The overall aim of the Ph.D. research is to establish an analytic definition of tensor scale in n-dimensional (n-D) images, to develop an efficient computational solution for 2-D and 3-D images, and to investigate its role in various medical imaging applications including image interpolation, filtering, and segmentation. Firstly, an analytic definition of tensor scale for n-D images consisting of objects formed by pseudo-Riemannian partitioning manifolds has been formulated. Tensor scale captures contextual structural information which is useful in locally structure-adaptive anisotropic parameter control and in local structure description for object/image matching. Therefore, it is helpful in a wide range of medical imaging algorithms and applications. Secondly, an efficient computational solution of tensor scale for 2-D and 3-D images has been developed. The algorithm combines the Euclidean distance transform and several novel differential geometric approaches. The accuracy of the algorithm has been verified on both geometric phantoms and real images against theoretical results generated using a brute-force method. Also, a matrix representation has been derived, facilitating several operations including tensor field smoothing to capture larger contextual knowledge.
Thirdly, an inter-slice interpolation algorithm using 2-D tensor scale information of adjacent slices has been developed to determine the interpolation line at each image location in a grey-level image. Experimental results establish the superiority of the tensor scale based interpolation method over existing interpolation algorithms. Fourthly, an anisotropic diffusion filtering algorithm based on tensor scale has been developed. The method uses tensor scale to design the conductance function for the diffusion process, so that diffusion along structures is encouraged and boundary sharpness is preserved. The performance has been tested on phantoms and medical images at various noise levels, and the results were quantitatively compared with conventional gradient- and structure-tensor-based algorithms. The experimental results are quite encouraging. Also, a tensor scale based n-linear interpolation method has been developed, where the weights of neighbors are locally tuned based on local structure size and orientation. The method has been applied to several phantom and real images, and its performance has been evaluated in comparison with standard linear interpolation and windowed sinc interpolation methods. Experimental results show that the method generates more precise structure boundaries without causing ringing artifacts. Finally, a new anisotropic constrained region growing method, locally controlled by tensor scale, has been developed for vessel segmentation; it encourages axial region growing while arresting cross-structure leaking. The method has been successfully applied to several non-contrast pulmonary CT images. The accuracy of the new method has been evaluated using manual selection, and the results are very promising.
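The gradient-driven diffusion that serves as the conventional baseline here is the classic Perona-Malik scheme, sketched below in plain Python (an illustrative version; the thesis replaces the gradient-based conductance with one driven by tensor scale). The conductance g falls off with the local intensity difference, so diffusion smooths flat regions while nearly stopping at strong edges.

```python
import math

def perona_malik(img, iterations=10, kappa=15.0, dt=0.2):
    """Perona-Malik anisotropic diffusion on a grayscale image (list of rows).

    kappa sets the edge threshold of the conductance; dt is the time step
    (dt <= 0.25 keeps the 4-neighbour explicit scheme stable).
    """
    h, w = len(img), len(img[0])
    u = [row[:] for row in img]
    g = lambda d: math.exp(-(d / kappa) ** 2)   # conductance: small at edges
    for _ in range(iterations):
        nxt = [row[:] for row in u]
        for y in range(h):
            for x in range(w):
                acc = 0.0
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx_ = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx_ < w:
                        d = u[ny][nx_] - u[y][x]
                        acc += g(abs(d)) * d    # flux weighted by conductance
                nxt[y][x] = u[y][x] + dt * acc
        u = nxt
    return u
```

Swapping the conductance argument from the gradient magnitude to a structure descriptor such as tensor scale is what turns this into a structure-adaptive filter.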
|
7 |
Hiperbolinio vaizdų filtravimo skirtingo matavimo erdvėse analizė / Analysis of hyperbolic image filtering in spaces of different dimensionalityPuida, Mantas 27 May 2004 (has links)
This Master degree paper analyses hyperbolic image filtering in spaces of different dimensionality. It investigates the problem of optimal filtering space selection. Several popular image compression methods (both lossless and lossy) are reviewed. This paper analyses the problems of image smoothness parameter discovering, image dimensionality changing, hyperbolic image filtering and filtering efficiency evaluation and provides the solution methods of the problems. Schemes for the experimental examination of theoretical propositions and hypotheses are prepared. This paper comprehensively describes experiments with one-, two- and threedimensional images and the results of the experiments. Conclusions about the efficiency of hyperbolic image filtering in other than "native" image space are based on the results of the experiments. The criterion for the selection of optimal image filtering space is evaluated. Guidelines for further research are also discussed. The presentation Specific Features of Hyperbolic Image Filtering, which was based on this Master degree paper, was made at the conference Mathematics and Mathematical Modeling (KTU – 2004). This text is available in appendixes.
|
8 |
A study of some morphological operators in simplicial complex spaces
Salve Dias, Fabio Augusto 21 September 2012 (has links) (PDF)
In this work we study the framework of mathematical morphology on simplicial complex spaces. Simplicial complexes are a versatile and widely used structure for representing multidimensional data, such as meshes, which are three-dimensional complexes, or graphs, which can be interpreted as two-dimensional complexes. Mathematical morphology is one of the most powerful frameworks for image processing, including the processing of digital structures, and is heavily used in many applications. However, mathematical morphology operators on simplicial complex spaces are not a concept fully developed in the literature. In this work, we review some classical operators on simplicial complexes in the light of mathematical morphology, to show that they are morphological operators. We define some basic lattices and operators acting on these lattices: dilations, erosions, openings, closings, and alternating sequential filters, including their extension to weighted simplexes. However, the main contributions of this work are what we call dimensional operators: small, versatile operators that can be used to define new operators on simplicial complexes while maintaining properties from mathematical morphology. These operators can also be used to express virtually any operator from the literature. We illustrate all the defined operators and compare the alternating sequential filters against filters defined in the literature, where our filters show better results for the removal of small, intense noise from binary images.
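The basic dilation/erosion pair generalizes from pixel grids to any adjacency structure, which is the intuition behind morphology on complexes. A minimal sketch (plain graph adjacency standing in for the face/coface relations of a simplicial complex; not the dimensional operators of the thesis):

```python
def dilate(selected, neighbors):
    """Dilation: every selected element plus everything adjacent to one."""
    out = set(selected)
    for s in selected:
        out.update(neighbors.get(s, ()))
    return out

def erode(selected, neighbors, universe):
    """Erosion (the adjoint): keep only elements whose whole neighbourhood
    is selected."""
    sel = set(selected)
    return {s for s in universe
            if s in sel and all(n in sel for n in neighbors.get(s, ()))}
```

Composing them gives openings and closings, and alternating those at growing sizes gives the alternating sequential filters compared in the thesis.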
|
9 |
Example-guided image editing / Édition d'image guidée par exemple
Hristova, Hristina 20 October 2017 (links)
This thesis addresses three main topics from the domain of image processing: color transfer, high-dynamic-range (HDR) imaging, and guidance-based image filtering. The first part of the thesis is dedicated to color transfer between input and target images. We adopt cluster-based techniques and apply Gaussian mixture models to carry out a more precise color transfer. In addition, we propose four new mapping policies to robustly portray the target style in terms of two key features: color and light. Furthermore, we exploit the properties of the multivariate generalized Gaussian distribution (MGGD) in order to transfer an ensemble of features between images simultaneously. The multi-feature transfer is carried out using our novel transformation of the MGGD. Despite the efficiency of the proposed MGGD transformation for multi-feature transfer, our experiments have shown that the bounded Beta distribution provides a much more precise model for the color and light distributions of images. To exploit this property, we propose a new color transfer method in which we model the color and light distributions with the Beta distribution and introduce a novel transformation of the Beta distribution.
The second part of the thesis focuses on HDR imaging. We introduce a method for the automatic creation of HDR images from only two images, a flash image and a non-flash image. We mimic the camera response function with a brightness function, and we recover details from the flash image using our new chromatic adaptation transform (CAT), called the bi-local CAT. That way, we efficiently recover the dynamic range of real-world scenes without compromising the quality of the HDR image, as our method is robust to misalignment. In the context of HDR image creation, the bi-local CAT recovers details from the flash image and removes flash shadows and reflections. In the last part of the thesis, we exploit the potential of the bi-local CAT for various image editing applications such as image de-noising, image de-blurring, and texture transfer. We propose a novel guidance-based filter in which we embed the bi-local CAT. The proposed filter performs as well as, and for certain applications even better than, state-of-the-art methods.
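The simplest instance of statistics-based color transfer, matching per-channel mean and standard deviation in the style of Reinhard et al., can be sketched as follows (an illustrative baseline; the thesis's cluster-based, MGGD and Beta-distribution transformations refine this idea):

```python
def transfer_stats(source, target):
    """Shift and scale each source channel so its mean and standard deviation
    match the target's. `source` and `target` are lists of channels, each
    channel a flat list of values."""
    def stats(vals):
        m = sum(vals) / len(vals)
        s = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
        return m, s
    out = []
    for src_ch, tgt_ch in zip(source, target):
        ms, ss = stats(src_ch)
        mt, st = stats(tgt_ch)
        scale = st / ss if ss > 0 else 1.0
        out.append([(v - ms) * scale + mt for v in src_ch])
    return out
```

A single global affine map like this cannot capture multimodal palettes, which is precisely what motivates the per-cluster and richer-distribution models in the thesis.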
|
10 |
Metody pro odstranění šumu z digitálních obrazů / Digital Image Noise Reduction Methods
Čišecký, Roman January 2012 (links)
The master's thesis is concerned with digital image denoising methods. The theoretical part explains some elementary terms related to image processing, image noise, the categorization of noise, and the quality criteria of the denoising process. Particular denoising methods are also described, along with their advantages and disadvantages. The practical part deals with an implementation of the selected denoising methods in Java, in the environment of the RapidMiner application. In conclusion, the results obtained by the different methods are compared.
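One of the classic denoising methods typically covered in such a survey is the median filter, sketched here in 1-D (an illustrative Python version, not the thesis's Java implementation): each sample is replaced by the median of its neighbourhood, which removes impulse ("salt-and-pepper") noise while keeping step edges intact.

```python
def median_filter(signal, radius=1):
    """1-D median filter: replace each sample with the median of its window."""
    n = len(signal)
    out = []
    for i in range(n):
        window = sorted(signal[max(0, i - radius):min(n, i + radius + 1)])
        out.append(window[len(window) // 2])
    return out
```

Unlike a linear mean filter, the median is a rank statistic, so a single outlier in the window cannot drag the output value, and edges are not blurred.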
|