91 |
Graphical Models for Robust Speech Recognition in Adverse Environments. Rennie, Steven J. 01 August 2008 (has links)
Robust speech recognition in acoustic environments that contain multiple speech sources and/or complex non-stationary noise is a difficult problem, but one of great practical interest. The formalism of probabilistic graphical models constitutes a relatively new and very powerful tool for better understanding and extending existing
models, learning, and inference algorithms, and a bedrock for the creative, quasi-systematic development of new ones. In this thesis, a collection of new graphical models and inference algorithms for robust speech recognition is presented.
The problem of speech separation using multiple microphones is first treated. A family of variational algorithms for tractably combining multiple acoustic models of speech with observed sensor likelihoods is presented. The algorithms recover high-quality estimates of the speech sources even when there are more sources than microphones, and have improved upon the state of the art in SNR gain by over 10 dB.
Next the problem of background compensation in non-stationary acoustic environments is treated. A new dynamic noise adaptation (DNA) algorithm for robust noise compensation is presented, and shown to outperform several existing state-of-the-art
front-end denoising systems on the new DNA + Aurora II and Aurora II-M extensions of the Aurora II task.
Finally, the problem of recognizing speech in the presence of competing speech using a single microphone is treated. The Iroquois system for multi-talker speech separation and recognition
is presented. The system won the 2006 Pascal International Speech Separation Challenge and, remarkably, achieved super-human recognition performance on a majority of test cases in the task. The result marks a significant first in automatic speech recognition, and a milestone in computing.
|
92 |
Modelos de mistura de distribuições na segmentação de imagens SAR polarimétricas multi-look / Multi-look polarimetric SAR image segmentation using mixture models. Horta, Michelle Matos 04 June 2009 (has links)
This thesis focuses on the application of mixture models to the segmentation of multi-look polarimetric SAR images. Within this context, the SEM algorithm, together with estimators obtained by the method of moments, was applied to estimate the parameters of Wishart, Kp and G0p mixture models. Each of these distributions has specific parameters that allow it to fit data with different degrees of homogeneity. The Wishart distribution describes homogeneous regions, such as crop fields, well and is widely used in multi-look polarimetric SAR data analysis. The Kp and G0p distributions have a roughness parameter that allows them to describe both heterogeneous regions, such as vegetation and urban areas, and homogeneous regions. Besides mixture models based on a single family of distributions, the use of a dictionary containing all three families was also proposed and analyzed. The proposed SEM method, under the different models, is compared with two widely used techniques from the literature (the k-means and EM algorithms) on real L-band images. The SEM method with the G0p mixture model, combined with an outlier removal stage, provided the best classification results. The G0p distribution was the most flexible in fitting the different kinds of targets, while the Wishart distribution was robust to different initializations. The k-means algorithm with the Wishart distribution is robust for segmenting SAR images containing outliers, but it is not very flexible with respect to the variability of heterogeneous regions. The mixture model based on the dictionary of families improves the log-likelihood of the SEM method, but presents results similar to those of the G0p mixture model. For all types of initializations and clusters, the G0p distribution predominated in the distribution selection process of the dictionary of families.
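For intuition about the estimation machinery mentioned above, the following is a minimal sketch of a Stochastic EM (SEM) loop for a generic finite mixture: an E-step computes posterior responsibilities, an S-step samples hard labels, and an M-step re-estimates component parameters from the sampled partition. Gaussian components and simple moment-style updates are used here purely as stand-ins; the Wishart, Kp and G0p densities for polarimetric covariance data used in the thesis are not reproduced, and all names are illustrative.

```python
# Sketch of a Stochastic EM (SEM) loop for a finite mixture model. Gaussian
# components and moment-style updates stand in for the Wishart / Kp / G0p
# densities used in the thesis; the structure of the algorithm is the same.
import numpy as np

def sem_mixture(x, n_components=3, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    weights = np.full(n_components, 1.0 / n_components)
    means = np.quantile(x, (np.arange(n_components) + 0.5) / n_components)
    stds = np.full(n_components, x.std() + 1e-6)

    for _ in range(n_iter):
        # E-step: posterior responsibilities under the current parameters.
        dens = np.exp(-0.5 * ((x[:, None] - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
        post = weights * dens + 1e-300
        post /= post.sum(axis=1, keepdims=True)

        # S-step: draw one hard label per pixel from its posterior.
        labels = (rng.random(n)[:, None] < np.cumsum(post, axis=1)).argmax(axis=1)

        # M-step: re-estimate each component from the sampled partition.
        for k in range(n_components):
            xk = x[labels == k]
            if xk.size == 0:            # keep previous parameters for empty classes
                continue
            weights[k], means[k], stds[k] = xk.size / n, xk.mean(), xk.std() + 1e-6
        weights /= weights.sum()
    return weights, means, stds, labels
```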
|
93 |
Magnetic Resonance Image segmentation using Pulse Coupled Neural Networks. Swathanthira Kumar, Murali Murugavel M 08 May 2009 (has links)
The Pulse Coupled Neural Network (PCNN) was developed by Eckhorn to model the observed synchronization of neural assemblies in the visual cortex of small mammals such as the cat. In this dissertation, three novel PCNN-based automatic segmentation algorithms were developed to segment Magnetic Resonance Imaging (MRI) data: (a) PCNN image 'signature' based single region cropping; (b) PCNN - Kittler Illingworth minimum error thresholding and (c) PCNN - Gaussian Mixture Model - Expectation Maximization (GMM-EM) based multiple material segmentation. Among other control tests, the proposed algorithms were tested on three T2-weighted acquisition configurations comprising a total of 42 rat brain volumes, 20 T1-weighted MR human brain volumes from Harvard's Internet Brain Segmentation Repository and 5 human MR breast volumes. The results were compared against manually segmented gold standards, Brain Extraction Tool (BET) V2.1 results, published results and single-threshold methods. The Jaccard similarity index was used for numerical evaluation of the proposed algorithms. Our quantitative results demonstrate conclusively that PCNN-based multiple material segmentation strategies can approach a human eye's intensity delineation capability in grayscale image segmentation tasks.
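As a rough illustration of the GMM-EM stage referred to in algorithm (c), the sketch below fits a Gaussian mixture to voxel intensities with EM and labels each voxel by its most likely component. It assumes scikit-learn is available; the PCNN preprocessing and the dissertation's exact pipeline are not reproduced, and the function and parameter names are illustrative.

```python
# Minimal sketch: EM-fitted Gaussian mixture used to label voxel intensities
# into tissue classes (the GMM-EM stage of algorithm (c)). The PCNN
# preprocessing step from the dissertation is not shown.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_em_segment(volume, n_tissues=3, random_state=0):
    """Label each voxel of a grayscale MR volume with a GMM fitted by EM."""
    intensities = volume.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_tissues,
                          covariance_type="full",
                          random_state=random_state)
    labels = gmm.fit_predict(intensities)
    return labels.reshape(volume.shape)

# Usage on a synthetic "volume" with three intensity classes:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake = np.concatenate([rng.normal(m, 5, 1000) for m in (30, 90, 150)])
    seg = gmm_em_segment(fake.reshape(10, 10, 30))
    print(np.bincount(seg.ravel()))   # voxel count per estimated tissue class
```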
|
94 |
EM algorithm for Markov chains observed via Gaussian noise and point process information: Theory and case studies. Damian, Camilla; Eksi-Altay, Zehra; Frey, Rüdiger, January 2018 (has links) (PDF)
In this paper we study parameter estimation via the Expectation Maximization (EM) algorithm for a continuous-time hidden Markov model with diffusion and point process observation. Inference problems of this type arise for instance in credit risk modelling. A key step in the application of the EM algorithm is the derivation of finite-dimensional filters for the quantities that are needed in the E-Step of the algorithm. In this context we obtain exact, unnormalized and robust filters, and we discuss their numerical implementation. Moreover, we propose several goodness-of-fit tests for hidden Markov models with Gaussian noise and point process observation. We run an extensive simulation study to test speed and accuracy of our methodology. The paper closes with an application to credit risk: we estimate the parameters of a hidden Markov model for credit quality where the observations consist of rating transitions and credit spreads for US corporations.
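The paper itself works in continuous time with diffusion and point-process observations; as a loose, discretized analogue only, the sketch below performs one EM (Baum-Welch) step for a finite-state Markov chain observed in Gaussian noise, using scaled forward-backward recursions for the E-step. It is not the finite-dimensional filter derivation of the paper, and all names are illustrative.

```python
# Loose discrete-time analogue only: one EM (Baum-Welch) step for a
# finite-state Markov chain observed in Gaussian noise, with scaled
# forward-backward recursions for the E-step.
import numpy as np

def em_step(y, A, pi, means, sigma):
    """y: (T,) observations; A: (K, K) transition matrix; pi: (K,) initial law;
    means: (K,) emission means; sigma: common emission std (kept fixed here)."""
    T, K = len(y), len(pi)
    emis = np.exp(-0.5 * ((y[:, None] - means) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    alpha = np.zeros((T, K)); beta = np.zeros((T, K)); c = np.zeros(T)
    alpha[0] = pi * emis[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):                     # scaled forward pass
        alpha[t] = (alpha[t - 1] @ A) * emis[t]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):            # scaled backward pass
        beta[t] = (A @ (emis[t + 1] * beta[t + 1])) / c[t + 1]

    gamma = alpha * beta                      # smoothed state probabilities
    xi = (alpha[:-1, :, None] * A[None] *     # smoothed transition probabilities
          (emis[1:] * beta[1:])[:, None, :] / c[1:, None, None])

    # M-step: re-estimate transition matrix, initial law and emission means.
    A_new = xi.sum(axis=0); A_new /= A_new.sum(axis=1, keepdims=True)
    pi_new = gamma[0] / gamma[0].sum()
    means_new = (gamma * y[:, None]).sum(axis=0) / gamma.sum(axis=0)
    return A_new, pi_new, means_new
```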
|
95 |
Modélisation gaussienne de rang plein des mélanges audio convolutifs appliquée à la séparation de sources / Full-rank Gaussian modeling of convolutive audio mixtures applied to source separation. Duong, Ngoc 15 November 2011 (links) (PDF)
We consider the problem of separating determined and under-determined reverberant audio mixtures, that is, extracting the signal of each source from a multichannel mixture. We propose a general Gaussian modeling framework in which the contribution of each source to the mixture channels in the time-frequency domain is modeled as a zero-mean Gaussian random vector whose covariance encodes both the spatial and the spectral characteristics of the source. In order to better model reverberation, we drop the classical narrowband assumption, which leads to a rank-1 spatial covariance, and compute the theoretical performance bound achievable with a full-rank spatial covariance. Experimental results show an increase of 6 dB in Signal-to-Distortion Ratio (SDR) in weakly to highly reverberant environments, which validates this generalization. We also consider the use of quadratic time-frequency representations and of the auditory-motivated equivalent rectangular bandwidth (ERB) frequency scale to increase the amount of exploitable information and to decrease the overlap between sources in the time-frequency representation. After this theoretical validation of the proposed framework, we focus on estimating the model parameters from a given mixture signal in a practical blind source separation scenario. We propose a family of Expectation-Maximization (EM) algorithms to estimate the parameters in the maximum likelihood (ML) or maximum a posteriori (MAP) sense. We propose a family of spatial location priors inspired by room acoustics theory, as well as a spatial continuity prior. We also study the use of two spectral priors previously used in a single-channel or rank-1 multichannel context: a spectral continuity prior and a nonnegative matrix factorization (NMF) model. The source separation results obtained with the proposed approach are compared with several baseline and state-of-the-art algorithms on simulated mixtures and on real-world recordings in various scenarios.
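To make the full-rank Gaussian model concrete, the sketch below implements only the multichannel Wiener estimator it implies: in each time-frequency bin the spatial image of a source is estimated as its covariance times the inverse mixture covariance times the observed mixture vector. The EM parameter estimation, spatial priors and NMF spectral models discussed above are not reproduced; array shapes and variable names are assumptions for illustration.

```python
# Sketch of the multichannel Wiener estimator implied by the full-rank
# Gaussian model: R_j(f,t) = v_j(f,t) * R_spat_j(f). Parameter estimation
# (the EM algorithms and priors discussed above) is not shown.
import numpy as np

def wiener_source_images(X, v, R_spat, eps=1e-9):
    """
    X      : (F, T, I) complex STFT of the I-channel mixture
    v      : (J, F, T) nonnegative spectral variances of the J sources
    R_spat : (J, F, I, I) full-rank spatial covariance matrices
    returns: (J, F, T, I) estimated spatial images of the sources
    """
    F, T, I = X.shape
    J = v.shape[0]
    Y = np.zeros((J, F, T, I), dtype=complex)
    for f in range(F):
        for t in range(T):
            R_j = v[:, f, t, None, None] * R_spat[:, f]      # (J, I, I) source covariances
            R_x = R_j.sum(axis=0) + eps * np.eye(I)          # mixture covariance
            R_x_inv = np.linalg.inv(R_x)
            for j in range(J):
                W = R_j[j] @ R_x_inv                          # Wiener gain for source j
                Y[j, f, t] = W @ X[f, t]
    return Y
```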
|
96 |
Multitemporal Spaceborne Polarimetric SAR Data for Urban Land Cover Mapping. Niu, Xin, January 2012 (links)
Urban land cover mapping represents one of the most important remote sensing applications in the context of rapid global urbanization. In recent years, high resolution spaceborne Polarimetric Synthetic Aperture Radar (PolSAR) has been increasingly used for urban land cover/land-use mapping, since more information can be obtained in multiple polarizations and the collection of such data is less influenced by solar illumination and weather conditions. The overall objective of this research is to develop effective methods to extract accurate and detailed urban land cover information from spaceborne PolSAR data. Six RADARSAT-2 fine-beam polarimetric SAR and three RADARSAT-2 ultra-fine beam SAR images were used. These data were acquired from June to September 2008 over the north urban-rural fringe of the Greater Toronto Area, Canada. The major land-use/land-cover classes in this area include high-density residential areas, low-density residential areas, industrial and commercial areas, construction sites, roads, streets, parks, golf courses, forests, pasture, water and two types of agricultural crops.

In this research, various polarimetric SAR parameters were evaluated for urban land cover mapping. They include the parameters from Pauli, Freeman and Cloude-Pottier decompositions, the coherency matrix, the intensities of each polarization and their logarithms. Both object-based and pixel-based classification approaches were investigated. Through an object-based Support Vector Machine (SVM) and a rule-based approach, the efficiencies of various PolSAR features and multitemporal data combinations were evaluated. For the pixel-based approach, a contextual Stochastic Expectation-Maximization (SEM) algorithm was proposed. With an adaptive Markov Random Field (MRF) and a modified Multiscale Pappas Adaptive Clustering (MPAC), contextual information was exploited to improve the mapping results. To take full advantage of alternative PolSAR distribution models, a rule-based model selection approach was put forward and compared with a dictionary-based approach. Moreover, the capability of multitemporal fine-beam PolSAR data was compared with that of multitemporal ultra-fine beam C-HH SAR data. Texture analysis and a rule-based approach that exploits object features and spatial relationships were applied for further improvement.

Using the proposed approaches, detailed urban land-cover classes and finer urban structures could be mapped with high accuracy, in contrast to most previous studies, which have focused only on the extraction of urban extent or the mapping of very few urban classes. This is also one of the first comparisons of various PolSAR parameters for detailed urban mapping using an object-based approach. Unlike other multitemporal studies, the multitemporal analysis here focused on the significance of complementary information from both ascending and descending SAR data and on the temporal relationships within the data. Further, the proposed contextual analyses could effectively improve pixel-based classification accuracy and produce homogeneous results with preserved shape details while avoiding over-averaging. The proposed contextual SEM algorithm, one of the first to combine an adaptive MRF with a modified MPAC, was able to mitigate the degeneracy problem of traditional EM algorithms with fast convergence when dealing with many classes. This contextual SEM outperformed the contextual SVM in certain situations with regard to both accuracy and computation time.
By using such a contextual algorithm, the common PolSAR data distribution models, namely Wishart, G0p, Kp and KummerU, were compared for detailed urban mapping in terms of both mapping accuracy and time efficiency. In the comparisons, G0p, Kp and KummerU demonstrated better performance, with higher overall accuracies than Wishart. Nevertheless, the advantages of Wishart and the other models could also be effectively integrated by the proposed rule-based adaptive model selection, while only limited improvement could be observed with the dictionary-based selection that has been applied in previous studies. The use of polarimetric SAR data for identifying various urban classes was then compared with the ultra-fine-beam C-HH SAR data. The grey level co-occurrence matrix textures generated from the ultra-fine-beam C-HH SAR data were found to be more efficient than the corresponding PolSAR textures for separating urban from rural areas. An object-based and pixel-based fusion approach that combines ultra-fine-beam C-HH SAR texture data with PolSAR data was developed. In contrast to many other fusion approaches that have exploited pixel-based classification results to improve object-based classifications, the proposed rule-based fusion approach, using object features and contextual information, was able to extract several low-backscatter classes such as roads, streets and parks with reasonable accuracy.
|
97 |
Algorithms for Transcriptome Quantification and Reconstruction from RNA-Seq Data. Mangul, Serghei 16 November 2012 (links)
Massively parallel whole transcriptome sequencing, with its ability to generate full transcriptome data at the single-transcript level, provides a powerful tool with multiple interrelated applications, including transcriptome reconstruction and gene/isoform expression estimation, also known as transcriptome quantification. As a result, whole transcriptome sequencing has become the technology of choice for performing transcriptome analysis, rapidly replacing array-based technologies. The most commonly used transcriptome sequencing protocol, referred to as RNA-Seq, generates short (single or paired) sequencing tags from the ends of randomly generated cDNA fragments. The RNA-Seq protocol reduces sequencing cost and significantly increases data throughput, but it remains computationally challenging to reconstruct full-length transcripts and accurately estimate their abundances across all cell types from such data.
We focus on two main problems in transcriptome data analysis, namely, transcriptome reconstruction and quantification. Transcriptome reconstruction, also referred to as novel isoform discovery, is the problem of reconstructing the transcript sequences from the sequencing data. Reconstruction can be done de novo or it can be assisted by existing genome and transcriptome annotations. Transcriptome quantification refers to the problem of estimating the expression level of each transcript. We present genome-guided and annotation-guided transcriptome reconstruction methods, as well as methods for transcript and gene expression level estimation. Empirical results on both synthetic and real RNA-Seq datasets show that the proposed methods improve transcriptome quantification and reconstruction accuracy compared to previous methods.
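As a generic illustration of EM-based transcriptome quantification (not the exact methods proposed in the thesis), the sketch below fractionally assigns ambiguous reads to compatible transcripts in proportion to the current abundance estimates and then re-estimates those abundances from the expected counts.

```python
# Sketch of the classic EM scheme behind many transcriptome quantification
# tools: ambiguous reads are fractionally assigned to compatible transcripts
# in proportion to current abundances, which are then re-estimated.
import numpy as np

def em_read_fractions(compat, n_iter=200):
    """
    compat : (R, T) 0/1 matrix; read r is compatible with transcript t.
    Returns the estimated probability that a random read originates from each
    transcript. (Converting to transcript abundances would further divide by
    effective transcript lengths and renormalize.)
    """
    n_reads, n_transcripts = compat.shape
    theta = np.full(n_transcripts, 1.0 / n_transcripts)
    for _ in range(n_iter):
        w = compat * theta                      # E-step: fractional assignment
        w /= w.sum(axis=1, keepdims=True)
        theta = w.sum(axis=0) / n_reads         # M-step: expected read fractions
    return theta

# Toy example: three transcripts, six reads with ambiguous mappings.
compat = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1],
                   [0, 0, 1], [1, 1, 1], [0, 1, 0]])
print(em_read_fractions(compat))
```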
|
98 |
Comparison Of Missing Value Imputation Methods For Meteorological Time Series Data. Aslan, Sipan 01 September 2010 (links) (PDF)
Dealing with missing data in spatio-temporal time series constitutes an important branch of the general missing data problem. Since the statistical properties of time-dependent data are characterized by the sequentiality of observations, any interruption of consecutiveness in a time series causes severe problems. In order to make reliable analyses in this case, missing data must be handled cautiously without disturbing the statistical properties of the series, mainly its temporal and spatial dependencies.
In this study we aimed to compare several imputation methods for the appropriate completion of missing values in spatio-temporal meteorological time series. For this purpose, several imputation methods are assessed by their performance on artificially created missing data in monthly total precipitation and monthly mean temperature series obtained from the climate stations of the Turkish State Meteorological Service. The artificially created missing data are estimated by six methods. Single Arithmetic Average (SAA), Normal Ratio (NR) and NR Weighted with Correlations (NRWC) are the three simple methods used in the study. In addition, we used two computationally intensive methods for missing data imputation: a Multi Layer Perceptron type Neural Network (MLPNN) and an Expectation-Maximization algorithm based on Markov Chain Monte Carlo (EM-MCMC). Furthermore, we propose a modification of the EM-MCMC method in which the results of the simple imputation methods are used as auxiliary variables. Besides using an accuracy measure based on squared errors, we propose the Correlation Dimension (CD) technique, an important subject of nonlinear dynamic time series analysis, for the appropriate evaluation of imputation performance.
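For reference, the sketch below implements the three simple imputation rules in their common textbook forms: SAA averages the simultaneous observations of neighboring stations, NR weights each neighbor by the ratio of long-term means (normals), and NRWC additionally scales those weights by the correlation with the target station. The exact weighting used in the study may differ, and variable names are illustrative.

```python
# Sketch of the simple spatial imputation rules mentioned above, in their
# standard forms; the study's exact implementation may differ.
import numpy as np

def single_arithmetic_average(neighbors):
    """SAA: mean of the simultaneous observations at neighboring stations."""
    return float(np.mean(neighbors))

def normal_ratio(neighbors, neighbor_normals, target_normal):
    """NR: neighbors weighted by the ratio of long-term means (normals)."""
    weights = target_normal / np.asarray(neighbor_normals, dtype=float)
    return float(np.mean(weights * np.asarray(neighbors, dtype=float)))

def normal_ratio_corr(neighbors, neighbor_normals, target_normal, correlations):
    """NRWC: normal-ratio weights additionally scaled by the correlation
    between the target station and each neighbor (one common weighted variant)."""
    neighbors = np.asarray(neighbors, dtype=float)
    w = (target_normal / np.asarray(neighbor_normals, dtype=float)) * np.asarray(correlations)
    return float(np.sum(w * neighbors) / np.sum(w))

# Toy example: three neighboring stations for one missing monthly value.
print(single_arithmetic_average([42.0, 55.0, 48.0]))
print(normal_ratio([42.0, 55.0, 48.0], [50.0, 60.0, 45.0], target_normal=52.0))
print(normal_ratio_corr([42.0, 55.0, 48.0], [50.0, 60.0, 45.0], 52.0, [0.9, 0.8, 0.7]))
```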
|
99 |
Investigation of probabilistic principal component analysis compared to proper orthogonal decomposition methods for basis extraction and missing data estimation. Lee, Kyunghoon 21 May 2010 (links)
The identification of flow characteristics and the reduction of high-dimensional simulation data have capitalized on an orthogonal basis achieved by proper orthogonal decomposition (POD), also known as principal component analysis (PCA) or the Karhunen-Loeve transform (KLT). In the realm of aerospace engineering, an orthogonal basis is versatile for diverse applications, especially associated with reduced-order modeling (ROM) as follows: a low-dimensional turbulence model, an unsteady aerodynamic model for aeroelasticity and flow control, and a steady aerodynamic model for airfoil shape design. When a given data set lacks part of its data, POD must adopt a least-squares formulation, leading to gappy POD, which uses a gappy norm, a variant of the L2 norm that deals only with the known data. Although gappy POD was originally devised to restore marred images, its application has spread to aerospace engineering because various engineering problems can be reformulated as missing data estimation to exploit gappy POD. Similar to POD, gappy POD has a broad range of applications such as optimal flow sensor placement, experimental and numerical flow data assimilation, and impaired particle image velocimetry (PIV) data restoration.
Apart from POD and gappy POD, both of which are deterministic formulations, probabilistic principal component analysis (PPCA), a probabilistic generalization of PCA, has been used in the pattern recognition field for speech recognition and in the oceanography area for empirical orthogonal functions in the presence of missing data. In formulation, PPCA presumes a linear latent variable model relating an observed variable with a latent variable that is inferred only from an observed variable through a linear mapping called factor-loading. To evaluate the maximum likelihood estimates (MLEs) of PPCA parameters such as a factor-loading, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). By virtue of the EM algorithm, the EM-PCA is capable of not only extracting a basis but also restoring missing data through iterations whether the given data are intact or not. Therefore, the EM-PCA can potentially substitute for both POD and gappy POD inasmuch as its accuracy and efficiency are comparable to those of POD and gappy POD. In order to examine the benefits of the EM-PCA for aerospace engineering applications, this thesis attempts to qualitatively and quantitatively scrutinize the EM-PCA alongside both POD and gappy POD using high-dimensional simulation data.
In pursuing qualitative investigations, the theoretical relationship between POD and PPCA is transparent: the factor-loading MLE of PPCA, evaluated by the EM-PCA, pertains to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is nebulous because they approximate missing data differently, owing to their antithetical formulation perspectives: gappy POD solves a least-squares problem whereas the EM-PCA relies on the expectation of the observation probability model. To juxtapose gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. The unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms. Furthermore, this research delves into the ramifications of the different bases and norms that eventually characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the different basis and norm effects. Ultimately, the norm, which reflects a curve-fitting method, is found to affect estimation error reduction more significantly than the basis for two example test data sets: one missing data at only a single snapshot and the other missing data across all snapshots.
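A minimal sketch of the gappy POD repair step characterized above, assuming a basis is already available: the POD coefficients of an incomplete snapshot are obtained by least squares over the known entries only (the gappy norm), and the fitted basis expansion fills in the missing entries. The hybrid algorithms and the EM-PCA iteration are not shown, and the toy data are illustrative.

```python
# Gappy POD repair of a single incomplete snapshot: least-squares coefficients
# over the known entries, then fill the gaps with the fitted basis expansion.
import numpy as np

def gappy_pod_fill(snapshot, mask, basis):
    """snapshot: (n,) with arbitrary values at missing entries;
    mask: (n,) boolean, True where data are known; basis: (n, k) POD modes."""
    coeffs, *_ = np.linalg.lstsq(basis[mask], snapshot[mask], rcond=None)
    repaired = snapshot.copy()
    repaired[~mask] = basis[~mask] @ coeffs      # fill gaps with the POD fit
    return repaired

# Toy usage: build a 2-mode basis from complete snapshots, then repair one.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))                    # 10 complete snapshots of size 50
basis = np.linalg.svd(X, full_matrices=False)[0][:, :2]
truth = X[:, 0]
mask = rng.random(50) > 0.3                      # roughly 30% of entries missing
repaired = gappy_pod_fill(truth.copy(), mask, basis)
print(np.abs(repaired[~mask] - truth[~mask]).max())   # error at the filled entries
```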
From a numerical performance aspect, the EM-PCA is computationally less efficient than POD for intact data since it suffers from slow convergence inherited from the EM algorithm. For incomplete data, this thesis quantitatively found that the number of data-missing snapshots predetermines whether the EM-PCA or gappy POD outperforms the other because of the computational cost of a coefficient evaluation, resulting from a norm selection. For instance, gappy POD demands laborious computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm. In contrast, the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational cost of gappy POD and the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended regarding the selection between gappy POD and the EM-PCA for computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots and the EM-PCA for an incomplete data set involving multiple data-missing snapshots.
Last, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one with an emphasis on basis extraction and the other with a focus on missing data reconstruction for a given incomplete data set with scattered missing data.
The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the numerical propulsion system simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability.
Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model exhibit close agreement with those directly obtained by NPSS. Similarly, the second application illustrates that the EM-PCA is significantly more cost-effective than gappy POD at repairing spurious PIV measurements obtained from acoustically excited, bluff-body jet flow experiments. The EM-PCA reduces computational cost by factors of 8 to 19 compared to gappy POD while generating the same restoration results as those evaluated by gappy POD. All in all, through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for an incomplete data set with missing data spread over the entire data set.
|
100 |
Distributed estimation in resource-constrained wireless sensor networks. Li, Junlin 13 November 2008 (links)
Wireless sensor networks (WSNs) are an emerging technology with a wide range of applications, including environment monitoring, security and surveillance, health care, and smart homes. Given the severe resource constraints of wireless sensor networks, this research addresses the distributed estimation of unknown parameters by studying the interplay among resource, distortion, and lifetime, three major concerns for WSN applications.
The objective of the proposed research is to design efficient distributed estimation algorithms for resource-constrained wireless sensor networks, where the major challenge is the integrated design of local signal processing operations and strategies for inter-sensor communication and networking so as to achieve a desirable tradeoff among resource efficiency (bandwidth and energy), system performance (estimation distortion and network lifetime), and implementation simplicity. More specifically, we address the efficient distributed estimation from the following perspectives: (i) rate-distortion perspective, where the objective is to study the rate-distortion bound for the distributed estimation and to design practical and distributed algorithms suitable for wireless sensor networks to approach the performance bound by optimally allocating the bit rate for each sensor, (ii) energy-distortion perspective, where the objective is to study the energy-distortion bound for the distributed estimation and to design practical and distributed algorithms suitable for wireless sensor networks to approach the performance bound by optimally allocating the bit rate and transmission energy for each sensor, and (iii) lifetime-distortion perspective, where the objective is to maximize the network lifetime while meeting estimation distortion requirements by jointly optimizing the source coding, source throughput and multi-hop routing. Also, energy-efficient cluster-based distributed estimation is studied, where the objective is to minimize the overall energy cost by appropriately dividing the sensor field into multiple clusters with data aggregation at cluster heads.
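As a toy illustration of the resource-distortion trade-off described above (not the algorithms developed in this research), the sketch below lets each sensor quantize a noisy observation of an unknown parameter with its allocated bit budget, and lets a fusion center combine the messages with inverse-variance weights, treating uniform quantization noise as additive with variance step²/12. All names and parameter values are assumptions for illustration.

```python
# Illustrative sketch of quantized distributed estimation: per-sensor bit
# allocation determines quantization noise, and the fusion center forms an
# inverse-variance (BLUE-style) weighted estimate.
import numpy as np

def simulate_distributed_estimation(theta=1.0, noise_std=None, bits=None,
                                    dynamic_range=4.0, seed=0):
    rng = np.random.default_rng(seed)
    noise_std = np.asarray(noise_std, dtype=float)
    bits = np.asarray(bits)
    obs = theta + rng.normal(0.0, noise_std)              # local noisy observations
    step = 2.0 * dynamic_range / (2 ** bits)              # uniform quantizer step sizes
    quantized = np.clip(np.round(obs / step) * step,
                        -dynamic_range, dynamic_range)    # quantized messages
    var_total = noise_std ** 2 + step ** 2 / 12.0         # observation + quantization noise
    weights = (1.0 / var_total) / np.sum(1.0 / var_total)
    return float(np.sum(weights * quantized))             # fused estimate of theta

# Toy run: 5 sensors, heterogeneous noise, more bits allocated to cleaner sensors.
est = simulate_distributed_estimation(noise_std=[0.2, 0.4, 0.6, 0.8, 1.0],
                                      bits=[6, 5, 4, 3, 2])
print(est)
```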
|