61

Speckle image denoising methods based on total variation and non-local means

Jones, Chartese 01 May 2020 (has links)
Speckle noise occurs in a wide range of images due to sampling and digital degradation. Understanding how noise can be present in images has led to multiple denoising techniques, most of which assume an equal noise distribution. When the noise present in an image is not uniform, the resulting denoised image falls short of the desired quality. This research focuses on speckle noise. Unlike Gaussian noise, which affects single pixels of an image, speckle noise affects multiple pixels, so it cannot be removed with the traditional Gaussian denoising model. We develop a more accurate speckle denoising model and stable numerical methods for it. The model is based on TV minimization, the associated non-linear PDE, and Krissian et al.'s speckle noise equation model. A realistic and efficient speckle noise equation model was introduced with an edge-enhancing feature obtained by adopting a non-convex functional. An effective numerical scheme was introduced and its stability proved. Also, while working with TV minimization for the non-linear PDE and Krissian et al.'s model, we used a dual approach for faster computation, based on Chambolle's approach for image denoising. The NLM algorithm takes advantage of the high degree of redundancy in any natural image, and it is very accurate since all pixels contribute to denoising at any given pixel. However, due to the non-local averaging, one major drawback is computational cost. For this research, we discuss new denoising techniques based on NLM and total variation for images contaminated by speckle noise. We introduce blockwise and selective denoising methods based on the NLM technique, together with partial differential equation (PDE) methods for total variation, to enhance computational efficiency. Our PDE methods have been shown to be very computationally efficient, and as mentioned before, the NLM process is very accurate.
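The dual approach attributed above to Chambolle is concrete enough to sketch. Below is a minimal NumPy implementation of Chambolle's projection iteration for the classical Gaussian ROF model, min_u TV(u) + (1/(2*lam))*||u - f||^2; it illustrates the dual idea only, not the thesis's speckle-adapted model, and the parameters `lam` and `tau` are illustrative defaults.

```python
import numpy as np

def grad(u):
    """Forward differences; zero at the far boundary (Neumann)."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Backward differences; the negative adjoint of grad."""
    dx = np.zeros_like(px)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy = np.zeros_like(py)
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise_chambolle(f, lam=0.1, tau=0.125, n_iter=100):
    """Chambolle's dual projection for min_u TV(u) + (1/(2*lam))*||u - f||^2."""
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * div(px, py)
```

With tau <= 1/8 the iteration is provably convergent, which is one reason the dual formulation is attractive for fast computation.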
62

Bayesian inference and wavelet methods in image processing

Silwal, Sharad Deep January 1900 (has links)
Master of Science / Department of Statistics / Diego M. Maldonado / Haiyan Wang / This report addresses some mathematical and statistical techniques of image processing and their computational implementation. Fundamental theories have been presented, applied and illustrated with examples. To make the report as self-contained as possible, key terminologies have been defined and some classical results and theorems are stated, for the most part, without proof. Some algorithms and techniques of image processing have been described and substantiated with experimentation using MATLAB. Several ways of estimating original images from noisy image data, and their corresponding risks, are discussed. Two image processing concepts selected to illustrate computational implementation are: "Bayes classification" and "Wavelet denoising". The discussion of the latter involves introducing a specialized area of mathematics, namely, wavelets. A self-contained theory for wavelets is built by first reviewing basic concepts of Fourier Analysis and then introducing Multi-resolution Analysis and wavelets. For a better understanding of Fourier Analysis techniques in image processing, original solutions to some problems in Fourier Analysis have been worked out. Finally, implementation of the above-mentioned concepts is illustrated with examples and MATLAB codes.
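The report's experiments are in MATLAB; as a rough Python counterpart of the wavelet-denoising idea it describes, here is a sketch using PyWavelets with the classical VisuShrink soft threshold. The wavelet `db4`, the decomposition level and the MAD noise estimate are conventional choices assumed here, not settings taken from the report.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet='db4', level=3):
    """Soft-threshold the detail coefficients (VisuShrink threshold)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # MAD estimate of the noise level from the finest diagonal subband.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(img.size))
    out = [coeffs[0]]  # keep the coarse approximation untouched
    for detail in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, thresh, mode='soft')
                         for d in detail))
    return pywt.waverec2(out, wavelet)
```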
63

Application of Persistent Homology in Signal and Image Denoising

Zheng, Yi 12 June 2015 (has links)
No description available.
64

Denoising of Carpal Bones for Computerised Assessment of Bone Age

O'Keeffe, Darin January 2010 (has links)
Bone age assessment is a method of assigning a level of biological maturity to a child. It is usually performed either by comparing an x-ray of a child's left hand and wrist with an atlas of known bones, or by analysing specific features of bones such as ratios of width to height, or the degree of overlap with other bones. Both methods of assessment are labour intensive and prone to both inter- and intra-observer variability. This is motivation for developing a computerised method of bone age assessment. The majority of research and development on computerised bone age assessment has focussed on analysing the bones of the hand. The wrist bones, especially the carpal bones, have received far less attention and have only been analysed in young children in which there is clear separation of the bones. An argument is presented that the evidence for excluding the carpal bones from computerised bone age assessment is weak and that research is required to identify the role of carpal bones in the computerised assessment of bone age for children over eight years of age. Computerised analysis of the carpal bones in older children is a difficult computer vision problem plagued by radiographic noise, poor image contrast, and especially poor definition of bone contours. Traditional image processing methods such as region growing fail and even the very successful Canny linear edge detector can only find the simplest of bone edges in these images. The field of partial differential equation-based image processing provides some possible solutions to this problem, such as the use of active contour models to impose constraints upon the contour continuity. However, many of these methods require regularisation to achieve unique and stable solutions. An important part of this regularisation is image denoising. Image denoising was approached through development of a noise model for the Kodak computed radiography system, estimation of noise parameters using a robust estimator of noise per pixel intensity bin, and incorporation of the noise model into a denoising method based on oriented Laplacians. The results for this approach only showed a marginal improvement when using the signal-dependent noise model, although this likely reflects how the noise characteristics were incorporated into the anisotropic diffusion method, rather than the principle of this approach. Even without the signal-dependent noise term the oriented Laplacians denoising of the hand-wrist radiographs was very effective at removing noise and preserving edges.
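The thesis's denoiser combines oriented Laplacians with a signal-dependent noise model; neither is reproduced here, but the following Perona-Malik sketch, a simpler relative of that scheme with assumed parameters, shows the basic edge-preserving diffusion mechanism.

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=15.0, dt=0.2):
    """Perona-Malik anisotropic diffusion (illustrative parameters).
    np.roll gives periodic boundaries, which is adequate for a sketch."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences to the four nearest neighbours.
        n = np.roll(u, -1, axis=0) - u
        s = np.roll(u, 1, axis=0) - u
        e = np.roll(u, -1, axis=1) - u
        w = np.roll(u, 1, axis=1) - u
        # Edge-stopping function: diffusion shuts down across strong gradients.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
    return u
```

The edge-stopping function g suppresses smoothing across strong gradients, the same principle that lets the thesis's method remove noise while preserving bone contours.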
65

Odstranění rozmazání pomocí dvou snímků s různou délkou expozice / Removal of blur using two images with different exposure times

Sabo, Jozef January 2011 (has links)
In the presented work we study methods of image deblurring using two images of the same scene with different exposure times, focusing on two main approach categories, the so-called deconvolution and non-deconvolution methods. We present theoretical backgrounds on both categories and evaluate their limitations and advantages. We dedicate one section to comparing both method categories on test data (images), for which we use our MATLAB implementation of the methods. We also compare the effectiveness of said methods against the results of a selected single-image de-noising algorithm. We do not focus on the computational efficiency of the algorithms and work with single-channel images only.
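For the deconvolution category, the basic operation can be sketched with a frequency-domain Wiener filter. In the two-exposure setting the blur kernel would typically be estimated with the help of the sharp but noisy short exposure; in this illustrative fragment the PSF is simply assumed known, and `nsr` is a hypothetical noise-to-signal ratio.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Wiener deconvolution in the frequency domain.
    The PSF is assumed known here; a two-exposure method would estimate it
    from the short, sharp exposure. `nsr` is the noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=blurred.shape)    # transfer function of the blur
    B = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener filter
    return np.real(np.fft.ifft2(W * B))
```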
66

Processus alpha-stables pour le traitement du signal / Alpha-stable processes for signal processing

Fontaine, Mathieu 12 June 2019 (has links)
In audio signal processing, the observed signal is often assumed to equal the sum of the signals we wish to recover. Within a probabilistic model, it is then essential that the stochastic processes preserve their law under summation. The most widely used process satisfying this stability is the Gaussian process. Compared with the other α-stable processes satisfying the same stability, Gaussian processes have the particularity of admitting easily interpretable statistical tools such as the mean and the covariance. The existence of these moments makes it possible to sketch statistical methods for sound source separation (SSS) and, more generally, for signal processing. The weakness of these processes nevertheless lies in their inability to stray far from their mean, which limits the dynamics of the signals that can be modelled and can cause instabilities in the inference methods considered. Despite the absence of a closed-form probability density, α-stable processes enjoy results that do not hold in the Gaussian case. For example, a non-Gaussian α-stable vector admits a unique spatial representation. In short, the behaviour of a multivariate α-stable distribution is controlled by two objects: a so-called "spectral" measure describing the overall energy coming from each direction of space, and a vector locating the centroid of its probability density. This thesis introduces several α-stable models from a theoretical standpoint and develops them in several directions. In particular, we propose an extension of single-channel α-stable filtering theory to the multichannel case, adopting a novel spatial representation for α-stable vectors. We further develop a denoising model in which the noise and the speech follow α-stable distributions with different characteristic exponents α, the value of α controlling the stationarity of each source. This hybrid model has also allowed us to give a rigorous explanation of heuristic Wiener filters sketched in the 1980s. Another part of this manuscript describes how α-stable theory provides a method for sound source localization; in practice, it lets us infer whether a source is active at a specific point in space. / It is classic in signal processing to model the observed signal as the sum of desired signals. If we adopt a probabilistic model, it is preferable that the law of the summed processes be stable under addition. The Gaussian process notoriously satisfies this condition and admits useful statistical operators such as the covariance and the mean, whose existence allows a statistical model for SSS. However, the Gaussian process has difficulty deviating from its mean; this drawback limits signal dynamics and may cause unstable inference methods. On the contrary, non-Gaussian α-stable processes are stable under addition and permit the modelling of signals with considerable dynamics. For the last few decades, α-stable theory has raised mathematical challenges and has already been shown to be effective in filtering applications. This class of processes enjoys outstanding properties not available in the Gaussian case.
A major asset for signal processing is the unique spatial representation of a multivariate α-stable vector, controlled by a so-called spectral measure and a deterministic vector. The spectral measure provides information on the global energy coming from all space directions, while the vector localizes the centroid of the probability density function. This thesis introduces several α-stable models, with the aim of extending them in several directions. First, we propose an extension of single-channel α-stable filtering theory to a multichannel one; in particular, a novel spatial representation for α-stable vectors is proposed. Secondly, we develop α-stable models for denoising in which each component may admit a different α. This hybrid model provides a rigorous explanation of some heuristic Wiener filters outlined in the 1980s. We also describe how α-stable theory yields a new method for audio source localization, using the spectral measure resulting from the spatial representation of α-stable vectors; in practice, this leads to determining whether a source is active at a specific location. Our work consisted in investigating α-stable theory for signal processing and developing several models for a wide range of applications. The models introduced in this thesis could also be extended to further signal processing tasks: our multivariate α-stable models could be used for dereverberation or SSS, and the localization algorithm is implementable for room geometry estimation.
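The heavy-tailed behaviour that separates non-Gaussian α-stable laws from the Gaussian case (recovered at α = 2) is easy to observe numerically. A small SciPy sketch, unrelated to the thesis's own filtering algorithms:

```python
import numpy as np
from scipy.stats import levy_stable

# alpha = 2 recovers the Gaussian; smaller alpha gives heavier tails,
# i.e. samples that stray much further from the central location.
for alpha in (2.0, 1.5, 1.2):
    x = levy_stable.rvs(alpha, 0.0, size=10_000)  # beta = 0: symmetric law
    print(f"alpha={alpha}: 99.9th percentile of |x| =",
          f"{np.percentile(np.abs(x), 99.9):.1f}")
```

Sampling from levy_stable can be slow for large sizes, but the output makes the point: as α decreases, extreme excursions grow by orders of magnitude.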
67

Using the 3D shape of the nose for biometric authentication

Emambakhsh, Mehryar January 2014 (has links)
This thesis is dedicated to exploring the potential of the 3D shape of the nasal region for face recognition. In comparison to other parts of the face, the nose has a number of distinctive features that make it attractive for recognition purposes. It is relatively stable over different facial expressions, easy to detect because of its salient convexity, and difficult to cover up intentionally without attracting suspicion. In addition, compared to other facial parts such as the forehead, chin, mouth and eyes, the nose is not vulnerable to unintentional occlusions caused by scarves or hair. Prior to undertaking a thorough analysis of the discriminative features of the 3D nasal region, an overview of denoising algorithms and their impact on 3D face recognition algorithms is first provided. This analysis, one of the first to address the issue, evaluates the performance of holistic 3D algorithms when various denoising methods are applied. One important outcome of this evaluation is to determine the optimal denoising parameters in terms of overall 3D face recognition performance. A novel algorithm is also proposed to learn the statistics of the noise generated by 3D laser scanners and then simulate it over the face point clouds. Using this process, the robustness of the denoising and 3D face recognition algorithms over various noise powers can be quantitatively evaluated. A new algorithm is proposed to find the nose tip across various expressions and self-occluded samples. Furthermore, novel applications of the nose region to aligning faces in 3D are provided through two pose correction methods. The algorithms are very consistent and robust against different expressions, partial occlusions and self-occlusions. The nose's discriminative strength for 3D face recognition is analysed using two approaches. The first creates its feature sets by applying nasal curves to the depth map. The second utilises a novel feature space based on histograms of normal vectors to the response of the Gabor wavelets applied to the nasal region. To create the feature spaces, various triangular and spherical patches and nasal curves are employed, giving very high class separability. A genetic algorithm (GA) based feature selector is then used to make the feature space more robust against facial expressions. The basis of both algorithms is a highly consistent and accurate nasal region landmarking, which is quantitatively evaluated and compared with previous work. The recognition ranks provide the highest identification performance ever reported for the 3D nasal region. The results are not only higher than those of previous 3D nose recognition algorithms, but also better than, or very close to, recent results for whole 3D face recognition. The algorithms have been evaluated on three widely used 3D face datasets: FRGC, Bosphorus and UMB-DB.
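As a much-simplified sketch of one ingredient of the second feature space, histograms of surface-normal orientations can be computed from a depth-map patch as follows; the Gabor responses, patches and curves of the actual method are omitted, and the helper below is hypothetical rather than the thesis's pipeline.

```python
import numpy as np

def normal_histogram(depth, n_bins=16):
    """Joint histogram of surface-normal orientations for a depth-map patch
    (hypothetical helper, far simpler than the thesis's Gabor-based features)."""
    gy, gx = np.gradient(depth.astype(float))  # derivatives along rows, columns
    # Unit normals of the surface z = depth(x, y) are (-gx, -gy, 1), normalised.
    nz = 1.0 / np.sqrt(gx ** 2 + gy ** 2 + 1.0)
    nx, ny = -gx * nz, -gy * nz
    # Bin azimuth and elevation of each normal.
    azimuth = np.arctan2(ny, nx)
    elevation = np.arccos(np.clip(nz, -1.0, 1.0))
    hist, _, _ = np.histogram2d(azimuth.ravel(), elevation.ravel(),
                                bins=n_bins, density=True)
    return hist.ravel()  # fixed-length feature vector for a classifier
```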
68

Odstranění rozmazání pomocí dvou snímků s různou délkou expozice / Removal of blur using two images with different exposure times

Sabo, Jozef January 2012 (has links)
In the presented work we study methods of image deblurring using two images of the same scene with different exposure times, focusing on two main approach categories, the so-called deconvolution and non-deconvolution methods. We present theoretical backgrounds on both categories and evaluate their limitations and advantages. We dedicate one section to a comparison of both method categories on test data (images), for which we use a MATLAB implementation of the methods. We also compare the effectiveness of said methods against the results of a selected single-image de-noising algorithm. We do not focus on the computational efficiency of the algorithms and work with grayscale images only.
69

A Comparison of Data Transformations in Image Denoising

Michael, Simon January 2018 (has links)
The study of signal processing has wide applications, such as hi-fi audio, television, voice recognition and many other areas. Signals are rarely observed without noise, which obstructs our analysis of them. Hence, it is of great interest to study the detection, approximation and removal of noise. In this thesis we compare two methods for image denoising, each based on a data transformation: specifically, the Fourier transform and singular value decomposition are utilized in the respective methods and compared on grayscale images. The comparison is based on the visual quality of the resulting image, the maximum peak signal-to-noise ratios attainable by the respective methods, and their computational time. We find that the methods are fairly equal in visual quality. However, the method based on the Fourier transform scores higher in peak signal-to-noise ratio and demands considerably less computational time.
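In their simplest form, the two compared pipelines amount to keeping low-frequency Fourier coefficients versus keeping leading singular values. A minimal sketch of both, with an assumed `keep` fraction and `rank` rather than the thesis's tuned settings:

```python
import numpy as np

def psnr(clean, est):
    """Peak signal-to-noise ratio in dB, using the clean image's peak."""
    mse = np.mean((clean - est) ** 2)
    return 10 * np.log10(clean.max() ** 2 / mse)

def fft_denoise(img, keep=0.1):
    """Keep only the central (low-frequency) block of Fourier coefficients."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    mask = np.zeros_like(F, dtype=bool)
    mask[int(h / 2 * (1 - keep)):int(h / 2 * (1 + keep)),
         int(w / 2 * (1 - keep)):int(w / 2 * (1 + keep))] = True
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def svd_denoise(img, rank=30):
    """Rank-truncated SVD reconstruction: keep the largest singular values."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
```

Measuring psnr(clean, fft_denoise(noisy)) against psnr(clean, svd_denoise(noisy)), together with timings, reproduces the shape of the comparison described above.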
70

Méthodes multirésolutions non-linéaires. Applications au traitement d'image / Nonlinear multiresolution methods. Applications to image processing

Matei, Basarab 20 November 2002 (has links) (PDF)
This thesis introduces a class of two-dimensional multiscale transforms adapted to edges. These differ from two-dimensional wavelet transforms in that they are based on nonlinear, data-dependent operators. The operators are inspired by the ENO interpolation operators introduced by Harten and Osher in the context of the numerical simulation of shock waves. The goal is to build into the transform a specific treatment of edges which, by taking their geometric regularity into account, yields sparser representations and hence better approximation properties. From a theoretical point of view, we are interested in preserving the same concentration properties for the classical function spaces (Besov and $BV$), and we also examine the stability of these decompositions; this problem is far from being as simple as in the case of linear representations. The thesis addresses each of these difficulties and offers partial answers, together with numerical tests designed to concretely evaluate the performance of the proposed methods.
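The data-dependent stencil selection at the heart of these ENO-inspired transforms can be illustrated in one dimension. The toy sketch below predicts midpoint values with a quadratic through whichever neighbouring stencil oscillates less, which is what keeps the prediction from interpolating across an edge; the boundary handling is ad hoc and the function is illustrative only.

```python
import numpy as np

def eno_predict_midpoints(coarse):
    """Predict values halfway between coarse samples, choosing for each
    midpoint the quadratic stencil (left or right) with the smaller
    second difference: the data-dependent, nonlinear step that avoids
    interpolating across an edge."""
    c = np.asarray(coarse, dtype=float)
    mids = np.empty(len(c) - 1)
    for i in range(len(c) - 1):
        # Second differences measure the oscillation of each candidate stencil.
        left_ok = i >= 1
        right_ok = i + 2 <= len(c) - 1
        d2_left = abs(c[i-1] - 2*c[i] + c[i+1]) if left_ok else np.inf
        d2_right = abs(c[i] - 2*c[i+1] + c[i+2]) if right_ok else np.inf
        if d2_left <= d2_right:
            # Quadratic through samples i-1, i, i+1 evaluated at i + 1/2,
            # falling back to the linear average near the boundary.
            mids[i] = (-c[i-1] + 6*c[i] + 3*c[i+1]) / 8 if left_ok \
                      else 0.5 * (c[i] + c[i+1])
        else:
            # Quadratic through samples i, i+1, i+2 evaluated at i + 1/2.
            mids[i] = (3*c[i] + 6*c[i+1] - c[i+2]) / 8
    return mids
```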
