101

Eκτίμηση της συνάρτησης πυκνότητας πιθανότητας παραμέτρων που προέρχονται από σήματα πηγών ακουστικής εκπομπής / Estimation of the probability density function of parameters derived from acoustic emission source signals

Γρενζελιάς, Αναστάσιος 25 June 2009 (has links)
Στη συγκεκριμένη εργασία ασχολήθηκα με την εκτίμηση της συνάρτησης πυκνότητας πιθανότητας παραμέτρων που προέρχονται από σήματα πηγών ακουστικής εκπομπής που επεξεργάστηκα. Στο θεωρητικό κομμάτι το μεγαλύτερο ενδιαφέρον παρουσίασαν ο Μη Καταστροφικός Έλεγχος και η Ακουστική Εκπομπή, καθώς και οι εφαρμογές τους. Τα δεδομένα που επεξεργάστηκα χωρίζονται σε δύο κατηγορίες: σε εκείνα που μου δόθηκαν έτοιμα και σε εκείνα που λήφθηκαν μετά από μετρήσεις. Στην επεξεργασία των πειραματικών δεδομένων χρησιμοποιήθηκε ο αλγόριθμος πρόβλεψης-μεγιστοποίησης, τον οποίο μελέτησα θεωρητικά και με βάση τον οποίο εξάχθηκαν οι παράμετροι για κάθε σήμα. Έχοντας βρει τις παραμέτρους, προχώρησα στην ταξινόμηση των σημάτων σε κατηγορίες με βάση τη θεωρία της αναγνώρισης προτύπων. Στο τέλος της εργασίας παρατίθεται το παράρτημα με τα αναλυτικά αποτελέσματα, καθώς και η βιβλιογραφία που χρησιμοποίησα. / In this diploma thesis the subject was the estimation of the probability density function of parameters derived from signals of acoustic emission sources. In the theoretical part, the chapters of greatest interest were Non-Destructive Testing and Acoustic Emission, together with their applications. The data which were processed are divided into two categories: those which were given ready-made and those which required laboratory measurements. The expectation-maximization algorithm, which was used to process the laboratory data, was the basis for the calculation of the parameters of each signal. Having calculated the parameters, the signals were classified into categories according to the theory of pattern recognition. At the end of the thesis, an appendix with the detailed results is provided, together with the bibliography that was used.
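The pipeline this abstract describes, EM-based parameter estimation followed by pattern-recognition classification, can be sketched as follows. This is an illustrative sketch assuming a k-component one-dimensional Gaussian mixture as the signal-feature model; the function names and mixture form are mine, not the thesis's actual implementation.

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100):
    """Fit a k-component 1-D Gaussian mixture by expectation-maximization."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread-out initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances from responsibilities.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return w, mu, var

def classify(x, w, mu, var):
    """Assign each feature value to the most responsible mixture component."""
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return dens.argmax(axis=1)
```

Once the component parameters are estimated for each signal's features, `classify` plays the role of the pattern-recognition step: each signal is assigned to the component (category) with the highest posterior density.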
102

Motion Capture of Deformable Surfaces in Multi-View Studios

Cagniart, Cedric 16 July 2012 (has links) (PDF)
In this thesis we address the problem of digitizing the motion of three-dimensional shapes that move and deform in time. These shapes are observed from several points of view with cameras that record the scene's evolution as videos. Using available reconstruction methods, these videos can be converted into a sequence of three-dimensional snapshots that capture the appearance and shape of the objects in the scene. The focus of this thesis is to complement appearance and shape with information on the motion and deformation of objects. In other words, we want to measure the trajectory of every point on the observed surfaces. This is a challenging problem because the captured videos are only sequences of images, and the reconstructed shapes are built independently from each other. While the human brain excels at recreating the illusion of motion from these snapshots, using them to automatically measure motion is still largely an open problem. Most prior work on the subject has focused on tracking the performance of one human actor, and used strong prior knowledge of the articulated nature of human motion to handle the ambiguity and noise inherent to visual data. In contrast, the presented developments consist of generic methods that make it possible to digitize scenes involving several humans and deformable objects of arbitrary nature. To perform surface tracking as generically as possible, we formulate the problem as the geometric registration of surfaces and deform a reference mesh to fit a sequence of independently reconstructed meshes. We introduce a set of algorithms and numerical tools that integrate into a pipeline whose output is an animated mesh. Our first contribution consists of a generic mesh deformation model and numerical optimization framework that divides the tracked surface into a collection of patches, organizes these patches in a deformation graph and emulates elastic behavior with respect to the reference pose.
As a second contribution, we present a probabilistic formulation of deformable surface registration that embeds the inference in an Expectation-Maximization framework and explicitly accounts for the noise in the acquisition process. As a third contribution, we look at how prior knowledge can be used when tracking articulated objects, and compare different deformation models with skeletal-based tracking. The studies reported by this thesis are supported by extensive experiments on various 4D datasets. They show that in spite of weaker assumptions about the nature of the tracked objects, the presented ideas make it possible to process complex scenes involving several arbitrary objects, while robustly handling missing data and relatively large reconstruction artifacts.
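The Expectation-Maximization view of registration mentioned above treats point correspondences as hidden variables: the E-step computes soft correspondence weights, and the M-step solves for the transformation they imply. A drastically simplified sketch of that idea, assuming a single rigid translation instead of the thesis's patch-based non-rigid deformation model (the function name and annealing schedule are illustrative assumptions):

```python
import numpy as np

def em_register_translation(src, tgt, sigma2=5.0, anneal=0.85, n_iter=60):
    """EM over soft point correspondences, estimating a global translation t
    that aligns src with tgt; sigma2 is annealed so correspondences harden."""
    t = np.zeros(src.shape[1])
    for _ in range(n_iter):
        # E-step: soft correspondence weights between moved src and tgt points.
        d2 = (((src + t)[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
        logw = -0.5 * d2 / sigma2
        logw -= logw.max(axis=1, keepdims=True)  # stabilize the exponentials
        p = np.exp(logw)
        p /= p.sum(axis=1, keepdims=True)
        # M-step: translation minimizing the expected squared distance.
        t = (p[:, :, None] * (tgt[None, :, :] - src[:, None, :])).sum((0, 1)) / len(src)
        sigma2 = max(sigma2 * anneal, 1e-2)
    return t
```

The thesis's actual registration estimates a per-patch rigid transform on a deformation graph rather than one global translation, but the alternation between soft correspondences and a least-squares transformation update is the same.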
103

Mixture model analysis with rank-based samples

Hatefi, Armin January 2013 (has links)
Simple random sampling (SRS) is the most commonly used sampling design in data collection. In many applications (e.g., in fisheries and medical research), quantification of the variable of interest is either time-consuming or expensive, but ranking a number of sampling units, without actually measuring them, can be done relatively easily and at low cost. In these situations, one may use rank-based sampling (RBS) designs to obtain more representative samples from the underlying population and improve the efficiency of the statistical inference. In this thesis, we study the theory and application of finite mixture models (FMMs) under RBS designs. In Chapter 2, we study the problems of Maximum Likelihood (ML) estimation and classification in a general class of FMMs under different ranked set sampling (RSS) designs. In Chapter 3, we derive the Fisher information (FI) content of different RSS data structures, including complete and incomplete RSS data, and show that the FI contained in each variation of the RSS data about different features of FMMs is larger than the FI contained in their SRS counterparts. There are situations where it is difficult to rank all the sampling units in a set with high confidence. Forcing rankers to assign unique ranks to the units (as in RSS) can lead to substantial ranking error and consequently to poor statistical inference. We hence focus on the partially rank-ordered set (PROS) sampling design, which aims to reduce the ranking error and the burden on rankers by allowing them to declare ties (partially ordered subsets) among the sampling units. Studying the information and uncertainty structures of the PROS data in a general class of distributions, in Chapter 4 we show the superiority of the PROS design in data analysis over the RSS and SRS schemes. In Chapter 5, we also investigate the ML estimation and classification problems of FMMs under the PROS design.
Finally, we apply our results to estimate the age structure of a short-lived fish species based on the length frequency data, using SRS, RSS and PROS designs.
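The ranked set sampling design discussed in this abstract can be sketched as follows: in each cycle, for every rank r, a small set of units is drawn and cheaply ranked, and only the r-th ranked unit is actually measured. This sketch assumes perfect rankings (no ranking error) and is mine, not code from the thesis:

```python
import numpy as np

def ranked_set_sample(population, set_size, n_cycles, rng):
    """Draw a balanced ranked set sample of size set_size * n_cycles.
    Ranking is simulated here as exact sorting, i.e. perfect ranking."""
    sample = []
    for _ in range(n_cycles):
        for r in range(set_size):
            # Draw a set of units, rank them, measure only the r-th smallest.
            units = rng.choice(population, size=set_size, replace=False)
            sample.append(np.sort(units)[r])
    return np.array(sample)
```

For unimodal populations such as the normal, the mean of an RSS of size n typically has lower variance than the mean of an SRS of the same size, which is the efficiency gain the thesis exploits for mixture-model inference.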
104

Design of robust blind detector with application to watermarking

Anamalu, Ernest Sopuru 14 February 2014 (has links)
One of the difficult issues in detection theory is to design a robust detector that takes into account the actual distribution of the original data. The most commonly used statistical model for blind detection is the Gaussian distribution. Specifically, linear correlation is an optimal detection method in the presence of Gaussian-distributed features, but it has been found to be a sub-optimal detection metric when the density deviates substantially from a Gaussian distribution. Hence, we formulate a detection algorithm that enhances the detection probability by exploiting the true characteristics of the original data. To understand the underlying distribution of the data, we employ estimation techniques such as a parametric model, the approximated density-ratio logistic regression model, and semiparametric estimation. The semiparametric model has the advantage of yielding density ratios as well as the individual densities. Both methods are applicable to signals such as watermarks embedded in the spatial domain, and outperform conventional linear correlation on non-Gaussian-distributed data.
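The linear-correlation detector that this abstract identifies as optimal under Gaussian host data, and sub-optimal otherwise, amounts to thresholding a normalized inner product between the received signal and the watermark pattern. A minimal sketch; the function name and threshold value are illustrative assumptions:

```python
import numpy as np

def linear_correlation_detect(signal, watermark, threshold=0.05):
    """Blind watermark detection by normalized linear correlation.
    Optimal when the host data is Gaussian; sub-optimal for hosts whose
    density deviates strongly from Gaussian, which is the thesis's point."""
    c = float(signal @ watermark) / len(signal)
    return c, c > threshold
```

For a length-n Gaussian host, the correlation of an unwatermarked signal concentrates around zero with standard deviation on the order of 1/sqrt(n), so a fixed threshold separates watermarked from unwatermarked content.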
105

Received signal strength calibration for wireless local area network localization

Felix, Diego 11 August 2010 (has links)
Terminal localization for indoor Wireless Local Area Networks (WLAN) is critical for the deployment of location-aware computing inside buildings. The purpose of this research work is not to develop a novel WLAN terminal location estimation technique or algorithm, but rather to tackle challenges in survey data collection and in the calibration of Received Signal Strength (RSS) data across multiple mobile terminals. Three major challenges are addressed in this thesis: first, to decrease the influence of outliers introduced in the distance measurements by Non-Line-of-Sight (NLoS) propagation when an ultrasonic sensor network is used for data collection; second, to obtain high localization accuracy in the presence of fluctuations of the RSS measurements caused by multipath fading; and third, to determine an automated calibration method to reduce large variations in RSS levels when different mobile devices need to be located. In this thesis, a robust window function is developed to mitigate the influence of outliers in survey terminal localization. Furthermore, spatial filtering of the RSS signals to reduce the effect of the distance-varying portion of the noise is proposed. Two different survey point geometries are tested with the noise reduction technique: survey points arranged in sets of tight clusters and survey points uniformly distributed over the network area. Finally, an affine transformation is introduced as an RSS calibration method between mobile devices to decrease the effect of RSS level variation, and an automated calibration procedure based on the Expectation-Maximization (EM) algorithm is developed. The results show that the mean distance error in the survey terminal localization is well within an acceptable range for data collection. In addition, when the spatial averaging noise reduction filter is used, the location accuracy improves by 16%, and by 18% when the filter is applied to a clustered survey set as opposed to a straight-line survey set.
Lastly, the location accuracy is within 2m when an affine function is used for RSS calibration and the automated calibration algorithm converged to the optimal transformation parameters after it was iterated for 11 locations.
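The affine RSS calibration between devices described above maps one device's readings onto another's as rss_ref ≈ gain · rss_new + offset. When matched readings are available, the fit reduces to ordinary least squares, as in this illustrative sketch; the thesis's automated EM-based procedure handles the harder unmatched case, which this snippet does not attempt:

```python
import numpy as np

def fit_affine_calibration(rss_ref, rss_new):
    """Least-squares fit of rss_ref ≈ gain * rss_new + offset (both in dBm),
    mapping a new device's RSS readings onto the reference device's scale."""
    # Design matrix with an intercept column for the offset term.
    A = np.column_stack([rss_new, np.ones_like(rss_new)])
    gain, offset = np.linalg.lstsq(A, rss_ref, rcond=None)[0]
    return gain, offset
```

Once `gain` and `offset` are estimated, readings from the new device can be transformed before being fed to the location estimator trained on the reference device's survey data.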
106

Graphical Models for Robust Speech Recognition in Adverse Environments

Rennie, Steven J. 01 August 2008 (has links)
Robust speech recognition in acoustic environments that contain multiple speech sources and/or complex non-stationary noise is a difficult problem, but one of great practical interest. The formalism of probabilistic graphical models constitutes a relatively new and very powerful tool for better understanding and extending existing models, learning, and inference algorithms, and a bedrock for the creative, quasi-systematic development of new ones. In this thesis a collection of new graphical models and inference algorithms for robust speech recognition are presented. The problem of speech separation using multiple microphones is treated first. A family of variational algorithms for tractably combining multiple acoustic models of speech with observed sensor likelihoods is presented. The algorithms recover high-quality estimates of the speech sources even when there are more sources than microphones, and have improved upon the state of the art in terms of SNR gain by over 10 dB. Next the problem of background compensation in non-stationary acoustic environments is treated. A new dynamic noise adaptation (DNA) algorithm for robust noise compensation is presented, and shown to outperform several existing state-of-the-art front-end denoising systems on the new DNA + Aurora II and Aurora II-M extensions of the Aurora II task. Finally, the problem of recognizing speech in the presence of competing speech using a single microphone is treated. The Iroquois system for multi-talker speech separation and recognition is presented. The system won the 2006 Pascal International Speech Separation Challenge, and, amazingly, achieved super-human recognition performance on a majority of test cases in the task. The result marks a significant first in automatic speech recognition, and a milestone in computing.
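As a toy illustration of the underdetermined setting above (more sources than microphones), consider two Gaussian-distributed sources observed only through their sum: the posterior mean splits the observation between the sources in proportion to their prior variances. This is a didactic stand-in for model-based separation, not the thesis's variational algorithms, and all names are mine:

```python
import numpy as np

def map_separate_two_sources(y, mu1, v1, mu2, v2):
    """Posterior-mean estimates of sources x1 ~ N(mu1, v1) and x2 ~ N(mu2, v2)
    observed only through their sum y = x1 + x2 (one 'microphone')."""
    y = np.asarray(y, dtype=float)
    gain = v1 / (v1 + v2)              # share of the residual assigned to x1
    x1 = mu1 + gain * (y - mu1 - mu2)  # E[x1 | y]
    x2 = y - x1                        # the remainder is E[x2 | y]
    return x1, x2
```

The same principle, with rich speech priors (mixtures over spectral states) in place of single Gaussians, is what makes recovering several speakers from fewer sensors tractable.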
108

Enhanced classification approach with semi-supervised learning for reliability-based system design

Patel, Jiten 02 July 2012 (has links)
Traditionally, design engineers have used the Factor of Safety method to ensure that designs do not fail in the field. Access to advanced computational tools and resources has made this process obsolete, and new methods to introduce higher levels of reliability into engineering systems are currently being investigated. However, even though substantial computational resources are available, the resources required by reliability analysis procedures leave much to be desired. Furthermore, regression-based surrogate modeling techniques fail when there is discontinuity in the design space, caused by failure mechanisms, when the design is required to perform under severe externalities. Hence, in this research we propose efficient Semi-Supervised Learning based surrogate modeling techniques that enable accurate estimation of a system's response, even under discontinuity. These methods combine the available labeled and unlabeled datasets and provide better models than using labeled data alone. Labeled data are expensive to obtain, since the responses have to be evaluated, whereas unlabeled data are plentiful during reliability estimation, since the PDF information of the uncertain variables is assumed to be known. This superior performance is gained by combining the efficiency of Probabilistic Neural Networks (PNN) for classification with the Expectation-Maximization (EM) algorithm, which treats the unlabeled data as labeled data with hidden labels.
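The combination described above, treating unlabeled data as labeled data with hidden labels inside EM, can be sketched with simple one-dimensional Gaussian class models standing in for the thesis's Probabilistic Neural Networks; the function name and model choice are illustrative assumptions:

```python
import numpy as np

def semi_supervised_em(xl, yl, xu, n_classes=2, n_iter=50):
    """EM for class-conditional 1-D Gaussians: labeled points keep fixed
    one-hot responsibilities, unlabeled points get soft (hidden) labels."""
    # Initialize the class models from the labeled subset alone.
    mu = np.array([xl[yl == c].mean() for c in range(n_classes)])
    var = np.array([xl[yl == c].var() + 1e-6 for c in range(n_classes)])
    w = np.array([(yl == c).mean() for c in range(n_classes)])
    x = np.concatenate([xl, xu])
    resp_l = np.eye(n_classes)[yl]          # labeled responsibilities, one-hot
    for _ in range(n_iter):
        # E-step on the unlabeled points only: posterior class probabilities.
        dens = w * np.exp(-0.5 * (xu[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp_u = dens / dens.sum(axis=1, keepdims=True)
        resp = np.vstack([resp_l, resp_u])
        # M-step on labeled and unlabeled data together.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var
```

Because the unlabeled points contribute to the M-step through their soft labels, the fitted class boundaries improve over using the (expensive) labeled responses alone, which mirrors the abstract's argument.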
109

Motion Capture of Deformable Surfaces in Multi-View Studios / Acquisition de surfaces déformables à partir d'un système multicaméra calibré

Cagniart, Cédric 16 July 2012 (has links)
Cette thèse traite du suivi temporel de surfaces déformables. Ces surfaces sont observées depuis plusieurs points de vue par des caméras qui capturent l'évolution de la scène et l'enregistrent sous la forme de vidéos. Du fait des progrès récents en reconstruction multi-vue, cet ensemble de vidéos peut être converti en une série de clichés tridimensionnels qui capturent l'apparence et la forme des objets dans la scène. Le problème au coeur des travaux rapportés par cette thèse est de complémenter les informations d'apparence et de forme avec des informations sur les mouvements et les déformations des objets. En d'autres mots, il s'agit de mesurer la trajectoire de chacun des points sur les surfaces observées. Ceci est un problème difficile car les vidéos capturées ne sont que des séquences d'images, et car les formes reconstruites à chaque instant le sont indépendamment les unes des autres. Si le cerveau humain excelle à recréer l'illusion de mouvement à partir de ces clichés, leur utilisation pour la mesure automatisée du mouvement reste une question largement ouverte. La majorité des précédents travaux sur le sujet se sont focalisés sur la capture du mouvement humain et ont bénéficié de la nature articulée de ce mouvement qui pouvait être utilisé comme a-priori dans les calculs. La spécificité des développements présentés ici réside dans la généricité des méthodes qui permettent de capturer des scènes dynamiques plus complexes contenant plusieurs acteurs et différents objets déformables de nature inconnue a priori. Pour suivre les surfaces de la façon la plus générique possible, nous formulons le problème comme celui de l'alignement géométrique de surfaces, et déformons un maillage de référence pour l'aligner avec les maillages indépendamment reconstruits de la séquence. Nous présentons un ensemble d'algorithmes et d'outils numériques intégrés dans une chaîne de traitements dont le résultat est un maillage animé.
Notre première contribution est une méthode de déformation de maillage qui divise la surface en une collection de morceaux élémentaires de surfaces que nous nommons patches. Ces patches sont organisés dans un graphe de déformation, et une force est appliquée sur cette structure pour émuler une déformation élastique par rapport à la pose de référence. Comme seconde contribution, nous présentons une formulation probabiliste de l'alignement de surfaces déformables qui modélise explicitement le bruit dans le processus d'acquisition. Pour finir, nous étudions dans quelle mesure les a-prioris sur la nature articulée du mouvement peuvent aider, et comparons différents modèles de déformation à une méthode de suivi de squelette. Les développements rapportés par cette thèse sont validés par de nombreuses expériences sur une variété de séquences. Ces résultats montrent qu'en dépit d'a-prioris moins forts sur les surfaces suivies, les idées présentées permettent de traiter des scènes complexes contenant de multiples objets tout en se comportant de façon robuste vis-à-vis de données fragmentaires et d'erreurs de reconstruction. / In this thesis we address the problem of digitizing the motion of three-dimensional shapes that move and deform in time. These shapes are observed from several points of view with cameras that record the scene's evolution as videos. Using available reconstruction methods, these videos can be converted into a sequence of three-dimensional snapshots that capture the appearance and shape of the objects in the scene. The focus of this thesis is to complement appearance and shape with information on the motion and deformation of objects. In other words, we want to measure the trajectory of every point on the observed surfaces. This is a challenging problem because the captured videos are only sequences of images, and the reconstructed shapes are built independently from each other.
While the human brain excels at recreating the illusion of motion from these snapshots, using them to automatically measure motion is still largely an open problem. Most prior work on the subject has focused on tracking the performance of one human actor, and used strong prior knowledge of the articulated nature of human motion to handle the ambiguity and noise inherent to visual data. In contrast, the presented developments consist of generic methods that make it possible to digitize scenes involving several humans and deformable objects of arbitrary nature. To perform surface tracking as generically as possible, we formulate the problem as the geometric registration of surfaces and deform a reference mesh to fit a sequence of independently reconstructed meshes. We introduce a set of algorithms and numerical tools that integrate into a pipeline whose output is an animated mesh. Our first contribution consists of a generic mesh deformation model and numerical optimization framework that divides the tracked surface into a collection of patches, organizes these patches in a deformation graph and emulates elastic behavior with respect to the reference pose. As a second contribution, we present a probabilistic formulation of deformable surface registration that embeds the inference in an Expectation-Maximization framework and explicitly accounts for the noise in the acquisition process. As a third contribution, we look at how prior knowledge can be used when tracking articulated objects, and compare different deformation models with skeletal-based tracking. The studies reported by this thesis are supported by extensive experiments on various 4D datasets. They show that in spite of weaker assumptions about the nature of the tracked objects, the presented ideas make it possible to process complex scenes involving several arbitrary objects, while robustly handling missing data and relatively large reconstruction artifacts.
110

Modelos de mistura de distribuições na segmentação de imagens SAR polarimétricas multi-look / Multi-look polarimetric SAR image segmentation using mixture models

Michelle Matos Horta 04 June 2009 (has links)
Esta tese se concentra em aplicar os modelos de mistura de distribuições na segmentação de imagens SAR polarimétricas multi-look. Dentro deste contexto, utilizou-se o algoritmo SEM em conjunto com os estimadores obtidos pelo método dos momentos para calcular as estimativas dos parâmetros do modelo de mistura das distribuições Wishart, Kp ou G0p. Cada uma destas distribuições possui parâmetros específicos que as diferem no ajuste dos dados com graus de homogeneidade variados. A distribuição Wishart descreve bem regiões com características mais homogêneas, como cultivo. Esta distribuição é muito utilizada na análise de dados SAR polarimétricos multi-look. As distribuições Kp e G0p possuem um parâmetro de rugosidade que as permitem descrever tanto regiões mais heterogêneas, como vegetação e áreas urbanas, quanto regiões homogêneas. Além dos modelos de mistura de uma única família de distribuições, também foi analisado o caso de um dicionário contendo as três famílias. Há comparações do método SEM proposto para os diferentes modelos com os métodos da literatura k-médias e EM utilizando imagens reais da banda L. O método SEM com a mistura de distribuições G0p forneceu os melhores resultados quando os outliers da imagem são desconsiderados. A distribuição G0p foi a mais flexível ao ajuste dos diferentes tipos de alvo. A distribuição Wishart foi robusta às diferentes inicializações. O método k-médias com a distribuição Wishart é robusto à segmentação de imagens contendo outliers, mas não é muito flexível à variabilidade das regiões heterogêneas. O modelo de mistura do dicionário de famílias melhora a log-verossimilhança do método SEM, mas apresenta resultados parecidos com os do modelo de mistura G0p. Para todos os tipos de inicialização e grupos, a distribuição G0p predominou no processo de seleção das distribuições do dicionário de famílias. / The main focus of this thesis consists of the application of mixture models in multi-look polarimetric SAR image segmentation. 
Within this context, the SEM algorithm, together with the method of moments, was applied to the estimation of the Wishart, Kp and G0p mixture model parameters. Each of these distributions has specific parameters that allow fitting data with different degrees of homogeneity. The Wishart distribution is suitable for modeling homogeneous regions, such as crop fields. This distribution is widely used in multi-look polarimetric SAR data analysis. The Kp and G0p distributions have a roughness parameter that allows them to describe both heterogeneous regions, such as vegetation and urban areas, and homogeneous regions. Besides adopting mixture models of a single family of distributions, the use of a dictionary with all three families of distributions was proposed and analyzed. Also, a comparison between the performance of the proposed SEM method, considering the different models on real L-band images, and two widely known techniques described in the literature (the k-means and EM algorithms) is shown and discussed. The proposed SEM method, considering a G0p mixture model combined with an outlier removal stage, provided the best classification results. The G0p distribution was the most flexible for fitting the different kinds of data. The Wishart distribution was robust to different initializations. The k-means algorithm with the Wishart distribution is robust for segmentation of SAR images containing outliers, but it is not as flexible with respect to the variability of heterogeneous regions. The mixture model considering the dictionary of distributions improves the SEM method's log-likelihood, but presents results similar to those of the G0p mixture model. For all types of initializations and clusters, the G0p distribution prevailed in the distribution selection process of the dictionary of distributions.
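The SEM algorithm used above differs from plain EM in that the E-step draws a hard label for each observation from its posterior before the M-step re-estimates each component by moment estimates (sample mean and variance). A minimal sketch for a two-component one-dimensional Gaussian mixture, standing in for the thesis's Wishart, Kp and G0p models; all names and the median-split initialization are illustrative assumptions:

```python
import numpy as np

def sem_gmm_1d(x, n_iter=50, seed=0):
    """Stochastic EM (SEM) for a two-component 1-D Gaussian mixture."""
    rng = np.random.default_rng(seed)
    z = (x > np.median(x)).astype(int)  # crude initial split into two groups
    for _ in range(n_iter):
        # M-step: moment estimates (sample mean and variance) per component.
        mu = np.array([x[z == c].mean() for c in range(2)])
        var = np.array([x[z == c].var() + 1e-6 for c in range(2)])
        w = np.array([(z == c).mean() for c in range(2)])
        # Posterior over labels under the current fit.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        post = dens / dens.sum(axis=1, keepdims=True)
        # Stochastic E-step: draw one hard label per point from its posterior
        # (inverse-CDF sampling on each row of the posterior matrix).
        z = (post.cumsum(axis=1) < rng.random((len(x), 1))).sum(axis=1)
    return w, mu, var
```

The hard, sampled labels make each M-step an ordinary per-cluster moment estimation, which is why SEM pairs naturally with method-of-moments estimators for distributions, like Kp and G0p, whose ML equations are awkward.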
