21 |
A comparison of image processing algorithms for edge detection, corner detection and thinning. Parekh, Siddharth Avinash. January 2004 (has links)
Image processing plays a key role in vision systems. Its function is to extract and enhance pertinent information from raw data. In robotics, processing of real-time data is constrained by limited resources. Thus, it is important to understand and analyse image processing algorithms for accuracy, speed, and quality. The theme of this thesis is an implementation and comparative study of algorithms related to various image processing techniques, such as edge detection, corner detection and thinning. A re-interpretation of a standard technique, non-maxima suppression for corner detectors, was attempted. In addition, a thinning filter, Hall-Guo, was modified to achieve better results. Generally, real-time data is corrupted with noise, so this thesis also incorporates a few smoothing filters that help in noise reduction. Apart from comparing and analysing algorithms for these techniques, an attempt was made to implement correlation-based optic flow.
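For context, the non-maxima suppression step named in this abstract can be sketched as follows. This is a generic, illustrative version (window size and function names are assumptions, not the thesis's re-interpretation):

```python
import numpy as np

def non_maxima_suppression(response, window=3):
    """Keep a corner response only where it is the maximum of its
    local neighbourhood (and positive); zero out everything else."""
    h, w = response.shape
    r = window // 2
    out = np.zeros_like(response)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = response[y - r:y + r + 1, x - r:x + r + 1]
            if response[y, x] == patch.max() and response[y, x] > 0:
                out[y, x] = response[y, x]
    return out
```

Applied to a corner-response map (e.g. from a Harris detector), this collapses each cluster of strong responses to a single detected corner.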
|
22 |
Image Based Attitude And Position Estimation Using Moment Functions. Mukundan, R. 07 1900 (has links) (PDF)
No description available.
|
23 |
Applied color processing. Zhang, Heng. 29 November 2011 (has links)
The quality of a digital image pipeline relies greatly on its color reproduction, which should at a minimum handle color constancy; the final judgment of a pipeline's excellence is made through subjective observation by humans.
This dissertation addresses several topics surrounding the color processing of digital image pipelines from a practical point of view. Color processing fundamentals are discussed first to form a background for the topics that follow. A memory color assisted illuminant estimation algorithm is then introduced after a review of memory colors and some modeling techniques. The spectral sensitivity of the camera is required by many color constancy algorithms, but such data is often not readily available. To tackle this problem, an alternative to spectral characterization for color constancy parameter calibration is proposed. Hue control in color reproduction can be of great importance, especially where memory colors are concerned. A hue constrained matrix optimization algorithm is introduced to address this issue, followed by a psychophysical study to systematically arrive at a recommendation for optimized preferred color reproduction. Finally, a color constancy algorithm for high dynamic range scenes observing multiple illuminants is proposed. / Graduation date: 2012
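As a point of reference for the illuminant estimation discussed in this abstract, the classic gray-world baseline can be sketched as below. This is not the dissertation's memory-color-assisted algorithm; it is the simplest standard estimator, and all names are illustrative:

```python
import numpy as np

def gray_world_gains(rgb):
    """Estimate per-channel gains under the gray-world assumption:
    the average scene reflectance is achromatic, so the three channel
    means should be equal after correction."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    return means.mean() / means  # gains that equalise the channel means

def apply_white_balance(rgb, gains):
    """Scale each channel by its gain, clipping to the valid range."""
    return np.clip(rgb * gains, 0.0, 1.0)
```

Memory-color-assisted methods refine exactly this step by biasing the estimate toward known colors (skin, sky, foliage) detected in the scene.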
|
24 |
Bilateral and adaptive loop filter implementations in the 3D-high efficiency video coding standard. Amiri, Delaram. 09 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In this thesis, we describe an alternative implementation of the in-loop filtering method for 3D-HEVC. First, we propose the use of the adaptive loop filtering (ALF) technique for 3D-HEVC standard in-loop filtering. This filter uses a Wiener-based method to minimize the mean squared error between the filtered and original pixels. The performance of the adaptive loop filter at the picture level is evaluated. Results show up to 0.2 dB PSNR improvement in the luminance component for the texture and 2.1 dB for the depth. In addition, we obtain up to 0.1 dB improvement in the chrominance component for the texture view after applying this filter at the picture level. Moreover, a design of in-loop filtering with a fast bilateral filter for the 3D-HEVC standard is proposed. The bilateral filter smooths an image while preserving strong edges and can remove artifacts from an image. The performance of the bilateral filter at the picture level for 3D-HEVC is evaluated; test model HTM-6.2 is used to demonstrate the results. Results show up to a 20 percent reduction in the processing time of 3D-HEVC, with negligible effect on the PSNR of the encoded 3D video, when using the fast bilateral filter.
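The bilateral filter evaluated in this thesis can be illustrated with a brute-force sketch. Parameter names and values here are assumptions; the thesis uses a fast approximation rather than this direct form:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter: each output pixel is a weighted
    mean of its neighbours, weighted by both spatial distance and
    intensity difference, so smoothing stops at strong edges."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```

On a step edge, the range term drives cross-edge weights toward zero, which is why the filter suppresses coding artifacts without blurring object boundaries.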
|
25 |
Probabilistic Multi-Compartment Deformable Model, Application to Cell Segmentation. Farhand, Sepehr. 12 July 2013 (links)
Indiana University-Purdue University Indianapolis (IUPUI) / A crucial task in computer vision and biomedical imaging applications is to represent images in a numerically compact form for understanding, evaluating and/or mining their content. The fundamental step of this task is the segmentation of images into regions, given some homogeneity, prior appearance and/or shape criteria. Specifically, segmentation of cells in microscopic images is the first step in many biomedical image analysis applications. This thesis is part of the project entitled "Construction and profiling of biodegradable cardiac patches for the co-delivery of bFGF and G-CSF growth factors", funded by the National Institutes of Health (NIH). We present a method that simultaneously segments the population of cells while partitioning each cell region into cytoplasm and nucleus, in order to evaluate the spatial coordination of cells on the image plane, their density and their orientation. Given static microscopic images, with no edge information at the cytoplasm boundary and no time-sequence constraints, traditional cell segmentation methods do not perform well. The proposed method combines deformable models with a probabilistic framework in a simple graphical model, so as to capture the shape, structure and appearance of a cell while simultaneously partitioning it into nucleus and cytoplasm. We considered the relative topology of the two distinct cell compartments to derive a better segmentation and to compensate for the lack of edge information. The framework is applied to static fluorescence microscopy, where the cultured cells are stained with calcein AM.
|
26 |
Efficient construction of multi-scale image pyramids for real-time embedded robot vision. Entschev, Peter Andreas. 16 December 2013 (links)
Interest point detectors, or keypoint detectors, have long been of great interest for embedded robot vision, especially those which provide robustness against geometrical variations such as rotation, affine transformations and changes in scale. The detection of scale-invariant features is normally done by constructing multi-scale image pyramids and performing an exhaustive search for extrema in the scale space, an approach present in object recognition methods such as SIFT and SURF. These methods are able to find very robust interest points with properties suitable for object recognition, but are at the same time computationally expensive. In this work we present an efficient method for the construction of SIFT-like image pyramids in embedded systems such as the BeagleBoard-xM.
The method we present here aims at using computationally less expensive techniques and reusing already processed information efficiently in order to reduce the overall computational complexity. To simplify the pyramid building process we use binomial filters, instead of the conventional Gaussian filters of the original SIFT method, to calculate multiple scales of an image. Binomial filters have the advantage of being implementable in fixed-point notation, which is a big advantage for many embedded systems that lack native floating-point support. We also reduce the number of convolution operations needed by resampling already processed scales of the pyramid. After presenting our efficient pyramid construction method, we show how to implement it efficiently on a SIMD (Single Instruction, Multiple Data) platform -- the SIMD platform we use is the ARM Neon extension available in the BeagleBoard-xM's ARM Cortex-A8 processor. SIMD platforms in general are very useful for multimedia applications, where it is normally necessary to perform the same operation over many elements, such as pixels in images, enabling multiple data to be processed with a single processor instruction. However, the Neon extension in the Cortex-A8 processor does not support floating-point operations, so the whole method was carefully implemented to overcome this limitation. Finally, we provide comparison results for the method proposed here and the original SIFT approach, including execution time and repeatability of detected keypoints. With a straightforward implementation (without the SIMD platform), we show that our method takes approximately 1/4 of the time needed to build the entire original SIFT pyramid, while repeating up to 86% of the interest points found with the original method. With a complete fixed-point approach (including vectorization on the SIMD platform), repeatability reaches up to 92% of the original SIFT keypoints while reducing the processing time to less than 3% of the original.
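The binomial-pyramid idea described in this abstract can be sketched as follows. This illustrative version uses integer-only arithmetic, mirroring the fixed-point motivation, but it is not the author's implementation and all names are assumptions:

```python
import numpy as np

# Separable 1D binomial kernel [1, 4, 6, 4, 1] / 16: integer weights,
# so the whole convolution can run in fixed-point on cores with no FPU.
KERNEL = np.array([1, 4, 6, 4, 1], dtype=np.int64)

def binomial_blur(img):
    """Separable binomial smoothing using only integer arithmetic
    (a cheap approximation of a Gaussian blur)."""
    pad = np.pad(img.astype(np.int64), 2, mode='edge')
    # horizontal pass over the padded image
    h = sum(KERNEL[k] * pad[:, k:k + img.shape[1]] for k in range(5)) // 16
    # vertical pass, trimming the row padding
    return sum(KERNEL[k] * h[k:k + img.shape[0], :] for k in range(5)) // 16

def build_pyramid(img, levels=3):
    """Each level: binomial blur then 2x downsample, reusing the
    already-blurred result instead of re-filtering the full image."""
    pyramid = [np.asarray(img, dtype=np.int64)]
    for _ in range(levels - 1):
        pyramid.append(binomial_blur(pyramid[-1])[::2, ::2])
    return pyramid
```

Repeated binomial filtering converges to a Gaussian (by the central limit theorem), which is what justifies substituting it for the Gaussian scale-space filters of SIFT.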
|
27 |
Comparison between f-k migration and delay-and-sum methods for ultrasound Doppler imaging. Granado, Diogo Wachtel. 15 December 2017 (links)
Conselho Nacional do Desenvolvimento Científico e Tecnológico (CNPq) / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Financiadora de Estudos e Projetos (FINEP) / Fundação Araucária de Apoio ao Desenvolvimento Científico e Tecnológico do Paraná / Universidade Tecnológica Federal do Paraná (UTFPR) / Medical ultrasound equipment is always evolving. In the field of Doppler imaging, in which object movement and chiefly the blood flow of vessels can be measured, there are several techniques to improve image quality. The main problems of the Doppler technique are the complexity and the amount of data to be processed for image reconstruction. The aim of this work was to evaluate two methods for Doppler image processing. Initially, studies were carried out with the traditional beamforming technique, generated by the ULTRA-ORS research platform in the ultrasound laboratory. Then, with the Verasonics Vantage™ equipment, ultrasound plane waves were generated with a 128-element L11-4v linear transducer. Two Doppler phantoms were used: the CIRS Doppler String Phantom model 043 and the Doppler Flow Phantoms & Pumping Systems from ATS. Algorithms for B-mode image reconstruction were developed in Matlab® using the delay-and-sum (DAS) and f-k migration methods to generate color Doppler and power Doppler images. The B-mode images with plane waves were generated from data acquired with 1 to 75 angles, ranging from -8.88° to 8.88° in 0.24° steps.
The f-k migration results presented higher resolution than the DAS method, with errors of 1.0 % and 0.8 % for the lateral and axial resolutions, respectively, while the DAS method presented errors of 12.0 % for lateral resolution and 10.0 % for axial resolution. The data for color Doppler imaging with plane waves were acquired with 1 to 21 angles, ranging from -15.0° to 15.0° in 1.5° steps. Doppler velocity estimation using the DAS method showed better results (error of 8.3 %) than f-k migration (error of 16.6 %). Analyzing the results, it was possible to see that the plane-wave imaging technique allows a higher frame rate than traditional methods. Additionally, it was verified that the f-k migration method produces better-quality images using fewer steering angles, approximately 9, but shows worse performance than DAS when generating Doppler images.
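The delay-and-sum focusing compared in this work can be illustrated with a minimal single-point sketch. Array geometry, sampling rate and the plane-wave transmit model below are assumptions for illustration, not the thesis's Matlab® implementation:

```python
import numpy as np

def delay_and_sum(rf, elem_x, fs, c, point):
    """Delay-and-sum focusing for one image point: compute the
    round-trip travel time to each array element, pick the matching
    RF sample per channel, and sum coherently across the aperture.
    rf: (n_elements, n_samples) channel data assumed to start at t=0,
    with a plane wave fired at normal incidence (transmit delay z/c)."""
    x, z = point
    tx_t = z / c                                  # plane-wave transmit delay
    rx_t = np.sqrt((elem_x - x) ** 2 + z ** 2) / c  # per-element receive delay
    idx = np.round((tx_t + rx_t) * fs).astype(int)
    idx = np.clip(idx, 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()
```

Echoes from the focal point add coherently while off-focus echoes do not, which is the entire principle of the beamformer; a full image repeats this per pixel, which is why DAS is cheap per point but costly per frame.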
|
28 |
Active geometric model: multi-compartment model-based segmentation & registration. Mukherjee, Prateep. 26 August 2014 (links)
Indiana University-Purdue University Indianapolis (IUPUI) / We present a novel variational and statistical approach for model-based segmentation. Our model generalizes the Chan-Vese model and is proposed for concurrent segmentation of multiple objects embedded in the same image domain. We also propose a novel shape descriptor, the Multi-Compartment Distance Functions or mcdf. Our proposed framework for segmentation is two-fold: first, several training samples distributed across various classes are registered onto a common frame of reference; then, we use a variational method similar to Active Shape Models (ASMs) to generate an average shape model, which is in turn used to partition new images. The key advantages of such a framework are: (i) landmark-free automated shape training; (ii) a strictly shape-constrained model for fitting test data. Our model can naturally deal with shapes of arbitrary dimension and topology (closed/open curves). We term our model the Active Geometric Model, since it focuses on segmentation of geometric shapes. We demonstrate the power of the proposed framework in two important medical applications: one for morphology estimation of 3D motor neuron compartments, the other for thickness estimation of Henle's fiber layer in the retina. We also compare the qualitative and quantitative performance of our method with that of several other state-of-the-art segmentation methods.
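The level-set machinery underlying Chan-Vese-style models, which this abstract builds on, can be sketched with a signed distance function and the piecewise-constant region statistics. This is an illustrative fragment of the standard formulation, not the proposed mcdf descriptor:

```python
import numpy as np

def circle_sdf(shape, center, radius):
    """Signed distance function of a circle: negative inside,
    positive outside -- the standard implicit contour representation
    used by Chan-Vese-style level-set models."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return np.hypot(yy - center[0], xx - center[1]) - radius

def region_means(img, phi):
    """Piecewise-constant (Chan-Vese) region statistics: mean
    intensity inside (phi < 0) and outside (phi >= 0) the contour."""
    inside = phi < 0
    return img[inside].mean(), img[~inside].mean()
```

The Chan-Vese energy drives the contour so that each region is well summarized by its mean; a multi-compartment extension maintains one such function per compartment (e.g. nucleus and cytoplasm) with topology constraints between them.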
|