241

3D imaging using time-correlated single photon counting

Neimert-Andersson, Thomas January 2010 (has links)
This project investigates a laser radar system based on the principles of time-correlated single photon counting: by measuring the times of flight of reflected photons it can recover range profiles and perform three-dimensional imaging of scenes. Because of the photon counting technique, the resolution and precision the system can achieve are very high compared to analog systems. These properties make the system interesting for many military applications; for example, it can be used to interrogate non-cooperative targets at a safe distance in order to gather intelligence. However, signal processing is needed to extract the information from the data acquired by the system. This project focuses on the analysis of different signal processing methods. The Wiener filter and the Richardson-Lucy algorithm are used to deconvolve the data acquired by the photon counting system. To find the positions of potential targets, different non-linear least squares approaches are tested, as well as a more unconventional method called ESPRIT. The methods are evaluated on their ability to resolve two targets separated by a known distance, the accuracy with which they locate a single target, their robustness to noise, and their computational burden. Results show that fitting a curve made of a linear combination of asymmetric super-Gaussians to the data by non-linear least squares accurately resolves targets separated by 1.75 cm, the best result of all the methods tested. The accuracy in locating a single target is similar across the methods, but ESPRIT has a much faster computation time.
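As a rough illustration of the deconvolution step (a minimal sketch, not code from the thesis; the instrument response `irf`, the iteration count, and the flat initialization are assumptions), Richardson-Lucy deconvolution of a 1D timing histogram can be written as:

```python
import numpy as np

def richardson_lucy(counts, irf, n_iter=50):
    """Richardson-Lucy deconvolution of a 1D photon-count histogram.

    counts: observed timing histogram (non-negative)
    irf:    assumed instrument response function (normalized inside)
    """
    irf = irf / irf.sum()
    irf_flip = irf[::-1]                          # adjoint of the convolution
    est = np.full(len(counts), counts.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(est, irf, mode="same")
        ratio = counts / np.maximum(blurred, 1e-12)   # guard against /0
        est = est * np.convolve(ratio, irf_flip, mode="same")
    return est
```

Peaks in the returned estimate correspond to candidate surface returns; deconvolution matters precisely when two returns lie closer together than the width of the instrument response.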
242

Defect Detection Via THz Imaging: Potentials & Limitations

Houshmand, Kaveh 22 May 2008 (has links)
Until recent years, terahertz (THz) waves were an undiscovered, or more importantly, an unexploited region of the electromagnetic spectrum, owing to difficulties in generating and detecting THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. This non-destructive, non-contact imaging technique can penetrate diverse materials, so internal structures, in some cases invisible to other imaging modalities, can be visualized. Today, a variety of techniques is available to generate and detect THz waves in both pulsed and continuous fashion, in two different geometries: transmission and reflection modes. In this thesis, continuous-wave THz imaging was employed for higher spatial resolution. However, with any new technology come challenges: automated processing of THz images can be quite cumbersome, and low contrast and the presence of a largely uncharacterized type of noise make the analysis of these images difficult. In this work, we attempt to detect defects in composite material via segmentation using a terahertz imaging system. To our knowledge, this is the first time that materials of this type have been tested under terahertz cameras to detect manufacturing defects in the aerospace industry. In addition, the segmentation accuracy of THz images has been investigated by using a phantom. Beyond defect detection for composite materials, this can establish some general knowledge about terahertz imaging, its capabilities and limitations. To segment the THz images successfully, pre-processing techniques are inevitable. In this thesis, a variety of image processing techniques, self-developed or available from the literature, have been employed for image enhancement, ranging from filtering to contrast adjustment to fusion of phase and amplitude images using fuzzy set theory, to name a few. The results of the pre-processing and segmentation methods demonstrate promising outcomes for future work in this field.
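As a toy illustration of an enhancement-then-segmentation pipeline (a sketch under assumed conventions, not the thesis's methods; the percentile limits and the threshold are placeholders):

```python
import numpy as np

def stretch_and_segment(img, low_pct=2, high_pct=98, thresh=0.5):
    """Percentile contrast stretch followed by global thresholding.

    img: 2D array of THz amplitude values.  Returns a boolean mask that
    is True where the stretched intensity falls below `thresh`
    (defects are assumed to image darker than sound material).
    """
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    return stretched < thresh
```

The thesis layers far more on top of this (noise filtering, fuzzy fusion of phase and amplitude images), but a stretch-then-threshold skeleton of this kind is where most segmentation pipelines start.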
243

Modèles de Volterra à complexité réduite : estimation paramétrique et application à l'égalisation des canaux de communication / Reduced-complexity Volterra models: parameter estimation and application to communication channel equalization

Kibangou, Alain Y. 28 January 2005 (has links) (PDF)
A broad class of physical systems can be represented by the Volterra model. In particular, it has been shown that any time-invariant nonlinear system with fading memory can be represented by a Volterra model of finite memory and finite order. The model is therefore particularly attractive for the modeling and identification of nonlinear systems. One of its major assets is linearity with respect to its parameters, i.e. the coefficients of its kernels, a property that allows certain results established for the identification of linear models to be extended to it. The Volterra model can also be viewed as a natural extension of the impulse-response notion from linear systems to nonlinear ones. However, two limitations must be overcome: the number of parameters can be very large, and the matrix of input moments involved in minimum mean-square-error (MMSE) estimation of the model is poorly conditioned. This poor conditioning is also responsible for the slow convergence of LMS (Least Mean Squares) adaptive algorithms. This thesis mainly addresses these two issues. The proposed solutions are essentially based on the notion of orthogonality. On the one hand, orthogonality is applied to the model structure by expanding the Volterra kernels on an orthogonal basis of rational functions; the expansion is all the more parsimonious the better the basis is chosen. To this end, we developed new tools for optimizing Laguerre bases and BFOR (orthonormal rational function) bases for representing Volterra kernels. On the other hand, orthogonality is considered with respect to the input signals. By exploiting the statistical properties of the input, bases of multivariable orthogonal polynomials were constructed; the parameters of a Volterra model expanded on such bases can then be estimated without any matrix inversion, which significantly simplifies MMSE parameter estimation. Orthogonalization of the input signals was also considered via a Gram-Schmidt procedure; in an adaptive context, this accelerates the convergence of LMS-type algorithms without excessive extra computation. Some physical systems can be represented by a simplified Volterra model of low parametric complexity, such as the Hammerstein and Wiener models. This is the case for a communication channel providing access to a wireless network through an optical fibre. We show in particular that the uplink and downlink of this channel can be represented by a Wiener model and a Hammerstein model, respectively. In the single-sensor case, using a precoding of the input sequence, we develop a solution for semi-blind joint estimation of the transmission channel and the transmitted symbols. For the uplink, a multi-sensor configuration can also be considered; there, thanks to a specific precoding of the input sequence, we exploit the spatial diversity introduced by the sensors together with temporal diversity so as to obtain a tensor representation of the received signal.
By applying the tensor decomposition known as PARAFAC, we jointly and blindly estimate the channel and the transmitted symbols. Keywords: modeling, identification, orthogonal bases, Laguerre basis, orthonormal rational function bases, orthogonal polynomials, pole optimization, complexity reduction, equalization, Volterra model, Wiener model, Hammerstein model, PARAFAC decomposition.
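As a loose illustration of the parametric linearity the abstract highlights (a sketch, not the thesis's orthogonalized algorithms; the memory length and step size are arbitrary assumptions), a baseline LMS identification of a second-order Volterra model looks like:

```python
import numpy as np

def volterra_lms(x, d, memory=3, mu=0.01):
    """LMS identification of a 2nd-order Volterra model.

    x: input signal, d: desired (observed) output.  Returns the
    estimated coefficients: the linear kernel followed by the
    upper-triangular part of the quadratic kernel.
    """
    M = memory
    # indices of the quadratic terms x(n-i)x(n-j), i <= j
    quad_idx = [(i, j) for i in range(M) for j in range(i, M)]
    w = np.zeros(M + len(quad_idx))
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]          # x(n), ..., x(n-M+1)
        phi = np.concatenate([u, [u[i] * u[j] for (i, j) in quad_idx]])
        e = d[n] - w @ phi                    # a priori error
        w += mu * e * phi                     # LMS update
    return w
```

The slow convergence of exactly this kind of update, caused by the poor conditioning of the input moment matrix, is what the thesis's orthogonal bases and Gram-Schmidt preprocessing are designed to remedy.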
244

Algebraic methods for constructing blur-invariant operators and their applications

Pedone, M. (Matteo) 09 August 2015 (has links)
Image acquisition devices are always subject to physical limitations that often manifest as distortions in the appearance of the captured image. The most common types of distortions can be divided into two categories: geometric and radiometric distortions. Examples of the latter are changes in brightness, contrast, or illumination, sensor noise, and blur. Since image blur can have many different causes, it is usually inconvenient and computationally expensive to develop ad hoc algorithms to correct each specific type of blur. Instead, it is often possible to extract a blur-invariant representation of the image and utilize that information to make algorithms that are insensitive to blur. The work presented here mainly focuses on developing techniques for the extraction and application of blur-invariant operators. This thesis contains several contributions. First, we propose a generalized framework based on group theory to constructively generate complete blur-invariants. We construct novel operators that are invariant to a large family of blurs occurring in real scenarios: namely, those blurs that can be modeled by a convolution with a point-spread function having rotational symmetry, or combined rotational and axial symmetry. A second important contribution is the utilization of such operators to develop an algorithm for blur-invariant translational image registration. This algorithm is experimentally demonstrated to be more robust than other state-of-the-art registration techniques. The blur-invariant registration algorithm is then used as a pre-processing step for several restoration methods based on image fusion, such as depth-of-field extension and multi-channel blind deconvolution. All the described techniques are then re-interpreted as particular instances of Wiener deconvolution filtering. Thus, the third main contribution is the generalization of the blur-invariants and the registration techniques to color images, using a quaternion representation of color images and the quaternion Wiener filter, respectively. This leads to a blur-and-noise-robust registration algorithm for color images. We observe experimentally a significant increase in performance in both color texture recognition and blurred color image registration.
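As a sanity-check illustration of blur-robust translational registration (plain phase correlation, a standard technique; it is not the group-theoretic or quaternion operators the thesis develops):

```python
import numpy as np

def phase_correlation(img_a, img_b):
    """Translation estimate via phase correlation.

    Returns the (row, col) shift that takes img_a to img_b.
    """
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = np.conj(Fa) * Fb
    cross /= np.maximum(np.abs(cross), 1e-12)     # keep phase only
    corr = np.fft.ifft2(cross).real
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    rows, cols = corr.shape
    if r > rows // 2:
        r -= rows                                 # wrap to negative shift
    if c > cols // 2:
        c -= cols
    return r, c
```

Whitening the cross-power spectrum discards the Fourier magnitude, which is where a centrosymmetric blur does most of its damage; the thesis's invariants push this idea much further and come with completeness guarantees.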
245

Motion picture restoration

Kokaram, Anil Christopher January 1993 (has links)
This dissertation presents algorithms for restoring some of the major corruptions observed in archived film or video material. The two principal problems of impulsive distortion (Dirt and Sparkle, or Blotches) and noise degradation are considered. There is also an algorithm for suppressing the inter-line jitter common in images decoded from noisy video signals. In the case of noise reduction and Blotch removal, the thesis treats image sequences as three-dimensional signals involving the evolution of features in time and space. This is necessary if any process presented is to show an improvement over standard two-dimensional techniques. It is important to recognize that consideration of image sequences must involve an appreciation of the problems incurred by the motion of objects in the scene. The most obvious implication is that, due to motion, useful three-dimensional processing does not necessarily proceed in a direction 'orthogonal' to the image frames. Therefore, attention is given to motion estimation as it is used for image sequence processing. Image sequence models are discussed and the 3D autoregressive (AR) model is investigated. A multiresolution block-matching (BM) scheme is used for motion estimation throughout the major part of the thesis. Impulsive noise removal in image processing has traditionally been achieved with median filter structures. A new three-dimensional multilevel median structure is presented in this work, with the additional use of a detector that limits the distortion caused by the filters. This technique is found to be extremely effective in practice and is an alternative to the traditional global median operation. The new median filter is shown to be superior to those previously presented with respect to its ability to reject the kind of distortion found in practice. A model-based technique using the 3D AR model is also developed for detecting and removing Blotches; it achieves better fidelity at the expense of a heavier computational load. Motion-compensated 3D IIR and FIR Wiener filters are investigated with respect to their ability to reject noise in an image sequence, and are compared to several previously presented algorithms that are purely temporal in nature. The filters presented are found to be effective and compare favourably to the other algorithms; as expected, the 3D filtering process is superior to the purely temporal process. The algorithm presented for suppressing inter-line jitter uses a 2D AR model to estimate and correct the relative displacements between the lines. The output image is much more satisfactory to the observer, although in severe cases some drift of image features is to be expected; a suggestion for removing this drift is presented in the conclusions. Several problems remain in moving video, in particular line scratches and picture shake/roll. Line scratches cannot be detected successfully by the detectors presented and so cannot be removed efficiently. Suppressing shake and roll involves compensating the entire frame for motion, and there is a need to separate global from local motion. These difficulties provide ample opportunity for further research.
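A minimal flavour of detector-gated temporal filtering (a sketch with motion compensation omitted and an arbitrary threshold; the thesis's multilevel median structures and model-based detectors are considerably more refined):

```python
import numpy as np

def remove_blotches(prev_f, cur_f, next_f, thresh=30.0):
    """Three-frame blotch detect-and-replace (no motion compensation).

    A pixel is flagged as a blotch only when it differs strongly from
    BOTH temporal neighbours; flagged pixels are replaced by the
    temporal median, so the detector limits the damage the filter can do.
    """
    prev_f, cur_f, next_f = (np.asarray(f, float)
                             for f in (prev_f, cur_f, next_f))
    blotch = ((np.abs(cur_f - prev_f) > thresh) &
              (np.abs(cur_f - next_f) > thresh))
    med = np.median(np.stack([prev_f, cur_f, next_f]), axis=0)
    out = cur_f.copy()
    out[blotch] = med[blotch]
    return out, blotch
```

Gating the replacement on a detector is the key point: an unconditional median would also distort clean, moving regions, which is exactly the artifact the detector is there to prevent.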
246

Nonstationary Techniques For Signal Enhancement With Applications To Speech, ECG, And Nonuniformly-Sampled Signals

Sreenivasa Murthy, A January 2012 (has links) (PDF)
For time-varying signals such as speech and audio, short-time analysis becomes necessary to compute specific signal attributes and to keep track of their evolution. The standard technique is the short-time Fourier transform (STFT), with which one decomposes a signal in terms of windowed Fourier bases. An advancement over the STFT is wavelet analysis, in which a function is represented in terms of shifted and dilated versions of a localized function called the wavelet. A specific modeling approach, particularly in the context of speech, is based on short-time linear prediction or short-time Wiener filtering of noisy speech. In most nonstationary signal processing formalisms, the key idea is to analyze the properties of the signal locally, either by first truncating the signal and then performing a basis expansion (as in the case of the STFT), or by choosing compactly-supported basis functions (as in the case of wavelets). We retain the same motivation as these approaches, but use polynomials to model the signal on a short-time basis ("short-time polynomial representation"). To emphasize the local nature of the modeling, we refer to it as "local polynomial modeling (LPM)." We pursue two main threads of research in this thesis: (i) short-time approaches for speech enhancement; and (ii) LPM for enhancing smooth signals, with applications to ECG, noisy nonuniformly-sampled signals, and voiced/unvoiced segmentation in noisy speech.

Improved iterative Wiener filtering for speech enhancement

A constrained iterative Wiener filter solution for speech enhancement was proposed by Hansen and Clements. Sreenivas and Kirnapure improved the performance of the technique by imposing codebook-based constraints in the process of parameter estimation; the key advantage is that the optimal parameter search space is confined to the codebook. These nonstationary signal enhancement solutions, however, assume stationary noise. In practical applications noise is not stationary, and hence updating the noise statistics becomes necessary. We present a new approach to reliable noise estimation based on spectral subtraction: we first estimate the signal spectrum and perform signal subtraction to estimate the noise power spectral density, and we further smooth the estimated noise spectrum to ensure reliability. The key contributions are: (i) adaptation of the technique to non-stationary noises; (ii) a new initialization procedure for faster convergence and higher accuracy; (iii) experimental determination of the optimal LP-parameter space; and (iv) objective criteria and speech recognition tests for performance comparison.

Optimal local polynomial modeling and applications

We next address the problem of fitting a piecewise-polynomial model to a smooth signal corrupted by additive noise. Since the signal is smooth, it can be represented using low-order polynomial functions provided that they are locally adapted to the signal. We choose the mean-square error as the criterion of optimality. Since the model is local, it preserves the temporal structure of the signal and can also handle nonstationary noise. We show that there is a trade-off between the adaptability of the model to local signal variations and robustness to noise (the bias-variance trade-off), which we solve using a stochastic optimization technique known as the intersection of confidence intervals (ICI) technique. The key trade-off parameter is the duration of the window over which the optimum LPM is computed.
Within the LPM framework, we address three problems: (i) signal reconstruction from noisy uniform samples; (ii) signal reconstruction from noisy nonuniform samples; and (iii) classification of speech signals into voiced and unvoiced segments. The generic signal model is x(tn) = s(tn) + d(tn), 0 ≤ n ≤ N - 1. In problems (i) and (iii), tn = nT (uniform sampling); in (ii) the samples are taken at nonuniform instants. The signal s(t) is assumed to be smooth, i.e., it should admit a local polynomial representation. The problem in (i) and (ii) is to estimate s(t) from x(tn); i.e., we are interested in optimal signal reconstruction on a continuous domain starting from uniform or nonuniform samples. We show that, in both cases, the bias grows with the fitting window length while the variance decays with it, and the mean square error (MSE) is the sum of the squared bias and the variance, where L is the length of the window over which the polynomial fitting is performed, the bias involves a function f of s(t), typically comprising the higher-order derivatives of s(t) (the order itself dependent on the order of the polynomial), and the variance involves a function g of the noise variance. It is clear that the bias and variance have complementary characteristics with respect to L. Directly optimizing the MSE would give a value of L that involves the functions f and g; g may be estimated, but f is not known since s(t) is unknown, so it is not practical to compute the minimum MSE (MMSE) solution. Therefore, we obtain an approximate result by solving the bias-variance trade-off in a probabilistic sense using the ICI technique. We also propose a new approach to optimally selecting the ICI technique parameters, based on a new cost function that is the sum of the probability of false alarm and the area covered over the confidence interval. In addition, we address issues related to optimal model-order selection, the search space for window lengths, the accuracy of noise estimation, etc. The next issue addressed is voiced/unvoiced segmentation of the speech signal. Speech segments show different spectral and temporal characteristics depending on whether the segment is voiced or unvoiced, and most speech processing techniques process the two segments differently. The challenge lies in making detection techniques offer robust performance in the presence of noise. We propose a new technique for voiced/unvoiced classification that exploits the fact that voiced segments have a certain degree of regularity while unvoiced segments do not possess any smoothness. In order to capture the regularity in voiced regions, we employ the LPM; the key idea is that regions where the LPM is inaccurate are more likely to be unvoiced than voiced. Within this framework, we formulate a hypothesis testing problem based on the accuracy of the LPM fit and devise a test statistic for performing V/UV classification. Since the technique is based on the LPM, it is capable of adapting to nonstationary noises. We present Monte Carlo results to demonstrate the accuracy of the proposed technique.
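A bare-bones fixed-window LPM smoother (a sketch; the window length L is hard-coded here, whereas the thesis selects it per sample with the ICI rule):

```python
import numpy as np

def lpm_denoise(x, order=2, L=21):
    """Denoise samples x with sliding local polynomial fits.

    For each sample, a degree-`order` polynomial is fitted over a
    window of about L samples and evaluated at the window centre.
    Larger L lowers the variance but raises the bias -- the trade-off
    the ICI rule is meant to resolve; here L is simply fixed.
    """
    half = L // 2
    y = np.empty(len(x), dtype=float)
    for n in range(len(x)):
        lo, hi = max(0, n - half), min(len(x), n + half + 1)
        coeffs = np.polyfit(np.arange(lo, hi) - n, x[lo:hi], order)
        y[n] = np.polyval(coeffs, 0.0)        # value at the window centre
    return y
```

Running this with several values of L on the same noisy record makes the bias-variance trade-off visible: small L tracks the signal but passes noise, while large L suppresses noise but rounds off genuine features.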
247

Zabezpečení senzorů - ověření pravosti obrazu / Sensor Security - Verification of Image Authenticity

Juráček, Ivo January 2020 (has links)
This diploma thesis deals with image sensor security. The goal of the thesis was to study the integrity of data acquired from image sensors. The proposed method performs source camera identification from the noise characteristics of image sensors. The research examined the influence of denoising algorithms applied to digital images acquired from 15 different image sensors. Finally, a statistical evaluation of the computed results was carried out.
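Source camera identification of this kind is commonly built on sensor noise residuals (PRNU-style fingerprinting). A minimal sketch under that assumption follows; the Gaussian denoiser and the correlation test are generic stand-ins, not the thesis's specific pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.5):
    """Noise residual: the image minus a denoised version of itself."""
    img = np.asarray(img, float)
    return img - gaussian_filter(img, sigma)

def fingerprint_correlation(img, fingerprint):
    """Normalized correlation between a residual and a camera fingerprint."""
    r = noise_residual(img).ravel()
    f = np.asarray(fingerprint, float).ravel()
    r = r - r.mean()
    f = f - f.mean()
    return float(r @ f / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-12))
```

A fingerprint would be built by averaging the residuals of many images from the candidate camera; images taken by that camera should then correlate noticeably higher than images from other devices.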
248

Eliminace zkreslení obrazů duhovky / Suppression of distortion in iris images

Jalůvková, Lenka January 2014 (has links)
This master's thesis is focused on the suppression of distortion in iris images. The aim of this work is to study and describe existing degradation methods (1D motion blur, uniform 2D motion blur, Gaussian blur, atmospheric turbulence blur, and out-of-focus blur). These methods are implemented and tested on a set of images. We then designed methods for suppressing these distortions: inverse filtering, Wiener filtering, and iterative deconvolution. All of these methods were tested and evaluated. Based on the experimental results, we conclude that Wiener-filter restoration is the most accurate approach in our test set; it achieves the best results in both normal and iterative mode.
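As a compact illustration of the Wiener restoration the thesis found most accurate (a frequency-domain sketch with a constant noise-to-signal ratio K standing in for the unknown spectra; kernel handling is simplified):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, K=0.01):
    """Frequency-domain Wiener deconvolution of a 2D image.

    blurred: degraded image; psf: assumed blur kernel; K: constant
    noise-to-signal power ratio used as a tuning knob.
    """
    pad = np.zeros_like(blurred, dtype=float)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # centre kernel
    H = np.fft.fft2(pad)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))
```

Inverse filtering corresponds to K = 0 and amplifies noise wherever the blur's frequency response is small, which is exactly the failure mode the Wiener regularization term suppresses.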
249

Filtrace svalového rušení v EKG signálech / Muscle noise filtering in ECG signals

Novotný, Jiří January 2015 (has links)
This master's thesis deals with the optimization of the numerical coefficients of the Wiener filter for muscle noise filtering in ECG signals. The theoretical part covers the characteristics of the ECG signal and muscle interference. It also contains a summary of the wavelet transform, wavelet-domain Wiener filtering, and methods for threshold calculation and thresholding. The last theoretical part describes the optimization techniques used, exhaustive search and the Nelder-Mead simplex method, which were implemented in MATLAB in the practical part of this thesis. Functional verification and Wiener filter optimization were tested on the standard CSE electrocardiogram database. An exhaustive search provided the initial estimate for the Nelder-Mead method, and the Nelder-Mead optimization improves on the exhaustive search results by the order of hundredths to tenths. The practical part concludes by comparing the results of the implemented algorithm with optimized coefficients against other methods for filtering muscle interference in ECG signals.
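The two-stage search strategy the thesis describes can be sketched as follows (in Python rather than the thesis's MATLAB; the two-coefficient filter, the grid ranges, and the SNR objective are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

def neg_snr(theta, noisy, clean, denoise):
    """Negative output SNR (dB) of a parametrized denoiser -- the quantity
    to minimize.  `denoise` stands in for the wavelet Wiener filter."""
    est = denoise(noisy, theta)
    return -10.0 * np.log10(np.sum(clean ** 2) / np.sum((clean - est) ** 2))

def optimize_coeffs(noisy, clean, denoise):
    """Coarse exhaustive grid search, refined by Nelder-Mead."""
    grid = [np.array([a, b])
            for a in np.linspace(0.5, 3.0, 6)
            for b in np.linspace(0.5, 3.0, 6)]
    theta0 = min(grid, key=lambda t: neg_snr(t, noisy, clean, denoise))
    res = minimize(neg_snr, theta0, args=(noisy, clean, denoise),
                   method="Nelder-Mead")
    return res.x
```

The coarse grid plays the "exhaustive search" role of locating the right basin, after which Nelder-Mead polishes the optimum without needing gradients.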
250

Foreign Exchange Option Valuation under Stochastic Volatility

Rafiou, AS January 2009 (has links)
Magister Scientiae - MSc / Pricing options under constant volatility has been common practice for decades. Yet market data show that volatility is a stochastic phenomenon; this is evident in longer-duration instruments, in which the volatility of the underlying asset is dynamic and unpredictable. The extensively published methods of valuing options under stochastic volatility focus mainly on stock markets and on options written on a single reference asset. This work probes the effect of valuing a European call option written on a basket of currencies, under both constant-volatility and stochastic-volatility models. We apply a family of stochastic models to investigate the relative performance of option prices. For the valuation of the option under constant volatility, we derive a closed-form analytic solution that relaxes some of the assumptions in the Black-Scholes model. The problem of two-dimensional random diffusion of exchange rates and volatilities is treated with a present-value scheme and with mean-reverting and non-mean-reverting stochastic volatility models. A multi-factor Gaussian distribution function is applied to lognormal asset dynamics sampled from a normal distribution, which we generate by the Box-Muller method and make interdependent by Cholesky factor matrix decomposition. Furthermore, a Monte Carlo simulation method is adopted to approximate a general form of the numerical solution. The historic data considered date from 31 December 1997 to 30 June 2008. The basket contains ZAR as base currency; USD, GBP, EUR and JPY are the foreign currencies.
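A stripped-down version of the simulation machinery the abstract describes, combining Box-Muller sampling, Cholesky correlation, and Monte Carlo averaging (a sketch: single-step lognormal dynamics under constant volatility, with every rate drifted at the domestic rate r for simplicity; real FX pricing uses domestic-foreign rate differentials):

```python
import numpy as np

def box_muller(n, rng):
    """n standard normal samples via the Box-Muller transform."""
    m = (n + 1) // 2
    u1 = 1.0 - rng.random(m)                  # in (0, 1], keeps log finite
    u2 = rng.random(m)
    radius = np.sqrt(-2.0 * np.log(u1))
    z = np.concatenate([radius * np.cos(2 * np.pi * u2),
                        radius * np.sin(2 * np.pi * u2)])
    return z[:n]

def basket_call_mc(S0, sigma, corr, weights, K, r, T,
                   n_paths=100_000, seed=1):
    """Monte Carlo price of a European call on a currency basket.

    Constant-volatility lognormal dynamics; correlation is imposed via
    the Cholesky factor of `corr`.  All numerical inputs illustrative.
    """
    rng = np.random.default_rng(seed)
    S0, sigma, weights = (np.asarray(a, float) for a in (S0, sigma, weights))
    d = len(S0)
    L = np.linalg.cholesky(corr)
    Z = box_muller(n_paths * d, rng).reshape(n_paths, d) @ L.T
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.maximum(ST @ weights - K, 0.0)
    return np.exp(-r * T) * payoff.mean()
```

Swapping the constant `sigma` for a simulated stochastic variance path is the step that takes this from the constant-volatility benchmark toward the mean-reverting models compared in the thesis.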
