111

Representações hierárquicas de vocábulos de línguas indígenas brasileiras: modelos baseados em mistura de Gaussianas / Hierarchical representations of words of Brazilian indigenous languages: models based on Gaussian mixture

Lianet Sepúlveda Torres 08 December 2010 (has links)
Although there is a large diversity of indigenous languages in Brazil, little research has been devoted to these languages and their relationships. Numerous efforts have been dedicated to searching for similarities among words of indigenous languages in order to classify them into families. Following the most widely accepted classification of Brazilian indigenous languages, this research compares words of 10 Brazilian indigenous languages. The words are treated as speech signals, and the probability density function (PDF) of each word is estimated using a Gaussian mixture model (GMM); this estimate serves as a model representing the word. The models were compared using distance measures to construct hierarchical structures that illustrate possible relationships among the words. The hypothesis of this research is that GMM-based PDF estimates can characterize the words of indigenous languages, allowing distance measures between the PDFs to establish relationships among the words and confirm some of the classifications. The Expectation Maximization (EM) algorithm was implemented to estimate the GMM parameters. The Kullback-Leibler (KL) divergence was used to measure similarity between two PDFs and is the basis for the hierarchical structures that show the relationships among the models. The GMM-based PDF estimation was tested on simulated signals, confirming that the estimated parameters closely approximate the original ones. Several distance measures were implemented to verify that the similarities among the models depended on the models themselves and not on the distance measure adopted in this study. The results of all measures were similar; only the clusterings produced by the C2 distance showed some differences, so the C2 distance was proposed as a complement to the KL divergence. These results suggest that the relationships between models depend on their characteristics, not on the distance metrics selected in the study, and that GMM-based PDFs can properly characterize the words. In general, words belonging to languages of the same linguistic branch were grouped together, with a tendency to include isolated languages in the groups formed by the linguistic branches. Since the words of some languages present a characteristic pattern, those languages can be identified by it. Although the results for the words of the indigenous languages are inconclusive, the study is considered useful for increasing knowledge of the 10 languages studied and for proposing new lines of research dedicated to analyzing these words.
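As a rough illustration of the pipeline this abstract describes — fit a GMM per word via EM, compare the fitted densities with the KL divergence, and build a hierarchy from the pairwise distances — the following Python sketch uses scikit-learn and SciPy. It is not the thesis's implementation: the KL divergence between two GMMs has no closed form, so it is approximated here by Monte Carlo sampling, and the data, component counts, and sample sizes are illustrative stand-ins.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from sklearn.mixture import GaussianMixture

def fit_word_gmm(signal, n_components=3):
    """Fit a GMM to a 1-D speech signal (EM runs under the hood)."""
    return GaussianMixture(n_components=n_components, random_state=0).fit(
        signal.reshape(-1, 1))

def kl_divergence_mc(gmm_p, gmm_q, n_samples=5000):
    """Monte Carlo estimate of KL(p || q) between two fitted GMMs."""
    x, _ = gmm_p.sample(n_samples)
    return np.mean(gmm_p.score_samples(x) - gmm_q.score_samples(x))

# Toy stand-ins for word signals; the real inputs would be sampled speech.
rng = np.random.default_rng(0)
words = {f"word{i}": rng.normal(i * 0.5, 1.0, 2000) for i in range(4)}
models = {w: fit_word_gmm(s) for w, s in words.items()}

# Symmetrized pairwise KL matrix, condensed for SciPy's linkage.
names = list(models)
n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = 0.5 * (kl_divergence_mc(models[names[i]], models[names[j]])
                   + kl_divergence_mc(models[names[j]], models[names[i]]))
        dist[i, j] = dist[j, i] = d

# Hierarchical structure over the word models (plot with dendrogram()).
tree = linkage(dist[np.triu_indices(n, k=1)], method="average")
```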
112

Statistical modelling of data from performance of broiler chickens / Modelagem estatística de dados de desempenho de frangos de corte

Reginaldo Francisco Hilario 30 August 2018 (has links)
Experiments with broiler chickens are common today because the great market demand for chicken meat has created a need to improve the factors related to broiler production. Many studies have been conducted to improve handling techniques, and these studies employ statistical analysis methods and techniques. In studies comparing treatments, it is not uncommon to observe a lack of significant effect even when there is evidence indicating that the effects are significant. To avoid such outcomes, good planning before conducting the experiment is fundamental. In this context, a study of the power of the F test was carried out, emphasizing the relationships among test power, sample size, the mean difference to be detected, and the variance of chicken weight data. In the analysis of data from experiments with broilers of both sexes in which the experimental unit is the box, the models generally used do not take into account the variability between the sexes of the birds, which affects the precision of inference about the population of interest. A model for the total weight per box that takes the sex information of the broilers into account is proposed.
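The power study described above can be sketched with statsmodels' one-way ANOVA power tools. This is a hypothetical illustration: the detectable mean difference, standard deviation, and group count are made-up values, and Cohen's f is computed with a common two-extreme-means approximation rather than the thesis's exact design.

```python
from statsmodels.stats.power import FTestAnovaPower

delta, sigma, k_groups = 50.0, 120.0, 4   # grams; illustrative values only
effect_size = (delta / 2) / sigma         # rough Cohen's f for two extreme means

solver = FTestAnovaPower()
for n in (20, 40, 80, 160):               # total number of observations
    power = solver.power(effect_size=effect_size, nobs=n,
                         alpha=0.05, k_groups=k_groups)
    print(f"n={n:4d}  power={power:.3f}")

# Inverting the relationship: sample size needed to reach 80% power.
n_needed = solver.solve_power(effect_size=effect_size, alpha=0.05,
                              power=0.8, k_groups=k_groups)
print(f"n for 80% power: {n_needed:.0f}")
```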
113

Bayesian non-parametric parsimonious mixtures for model-based clustering / Modèles de mélanges Bayésiens non-paramétriques parcimonieux pour la classification automatique

Bartcus, Marius 26 October 2015 (has links)
This thesis focuses on statistical learning and multi-dimensional data analysis, and in particular on unsupervised learning of generative models for model-based clustering. We study Gaussian mixture models, both in the context of maximum likelihood estimation via the EM algorithm and in the Bayesian context of maximum a posteriori estimation via Markov Chain Monte Carlo (MCMC) sampling techniques. We mainly consider parsimonious mixture models, which are based on a spectral decomposition of the covariance matrix and provide a flexible framework, particularly for the analysis of high-dimensional data. We then investigate non-parametric Bayesian mixtures, which are based on general flexible processes such as the Dirichlet process and the Chinese Restaurant Process. This non-parametric formulation is relevant both for learning the model and for dealing with the issue of model selection. We propose new Bayesian non-parametric parsimonious mixtures and derive an MCMC sampling technique in which the mixture model and the number of mixture components are learned simultaneously from the data. The selection of the model structure is performed using Bayes factors. Through their non-parametric and sparse formulation, these models are useful for the analysis of large data sets when the number of classes is undetermined and grows with the data, and when the dimension is high. The models are validated on simulated data and standard real data sets. They are then applied to a difficult real-world problem: the automatic structuring of complex bioacoustic data derived from whale song signals. Finally, we open Markovian perspectives via hierarchical Dirichlet process hidden Markov models.
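The thesis estimates its non-parametric mixtures by MCMC with Bayes-factor model selection; as a lightweight stand-in for the same idea — letting the effective number of components be inferred from the data under a Dirichlet process prior — scikit-learn's variational BayesianGaussianMixture can be sketched as follows, on illustrative data.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)),
               rng.normal(5, 1, (200, 2))])    # two true clusters

dpgmm = BayesianGaussianMixture(
    n_components=10,                           # truncation level, not the answer
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1.0,            # the DP concentration alpha
    covariance_type="full",
    random_state=0,
).fit(X)

# Components with negligible posterior weight are effectively pruned away.
print("effective components:", np.sum(dpgmm.weights_ > 0.01))
```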
114

Algorithmic Trading : Hidden Markov Models on Foreign Exchange Data

Idvall, Patrik, Jonsson, Conny January 2008 (has links)
In this master's thesis, hidden Markov models (HMMs) are evaluated as a tool for forecasting movements in a currency cross. With an ever-increasing electronic market making way for more automated trading, so-called algorithmic trading, there is a constant need for new trading strategies that try to find alpha, the excess return, in the market. HMMs are based on the well-known theory of Markov chains, but the states are assumed hidden, governing some observable output. HMMs have mainly been used for speech recognition and communication systems, but have lately also been applied to financial time series with encouraging results. Both discrete and continuous versions of the model are tested, as well as single- and multivariate input data. In addition to the basic framework, two extensions are implemented in the belief that they will further improve the prediction capabilities of the HMM. The first is a Gaussian mixture model (GMM), in which each state is assigned a set of single Gaussians that are weighted together to replicate the density function of the stochastic process. This opens the door to modeling the non-normal distributions often assumed for foreign exchange data. The second is an exponentially weighted expectation maximization (EWEM) algorithm, which takes time attenuation into consideration when re-estimating the parameters of the model, keeping old trends in mind while giving more recent patterns more attention. Empirical results show that the HMM using continuous emission probabilities can, for some model settings, generate acceptable returns with Sharpe ratios well over one, while the discrete version generally performs poorly. The GMM therefore seems to be a highly needed complement to the HMM. The EWEM, however, does not improve results as one might have expected. Our general impression is that the HMM-based predictor we have developed and tested is too unstable to be adopted as a trading tool for foreign exchange data, with too many factors influencing the results. More research and development is called for.
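The core model — an HMM whose per-state emission densities are Gaussian mixtures — can be sketched with hmmlearn's GMMHMM. The synthetic returns series and all settings below are illustrative, and the EWEM re-estimation described in the thesis is not part of stock hmmlearn.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.006, size=(1000, 1))   # stand-in for FX log-returns

model = GMMHMM(n_components=2,     # hidden market regimes
               n_mix=3,            # Gaussians per state -> non-normal emissions
               covariance_type="diag",
               n_iter=100,
               random_state=0).fit(returns)

states = model.predict(returns)              # most likely regime path (Viterbi)
log_lik = model.score(returns)               # log-likelihood of the fit
next_regime_probs = model.transmat_[states[-1]]   # one-step regime forecast
```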
115

Differences and similarities in work absence behavior: empirical evidence from micro data

Nilsson, Maria January 2005 (has links)
This thesis consists of three self-contained essays about absenteeism. Essay I analyzes whether the design of the insurance system affects work absence, i.e., the classic insurance problem of moral hazard. Several reforms of the sickness insurance system were implemented during the period 1991-1996. Using negative binomial models with fixed effects, the analysis shows that both workers and employers changed their behavior due to the reforms. We also find that the extent of moral hazard varies depending on work contract structures. The reforms reducing compensation levels decreased workers' absence, both the number of absent days and the number of absence spells. The 1992 reform, which introduced sick pay paid by employers, also decreased absence levels, which can probably be explained by changes in personnel policy such as increased monitoring and screening of workers. Essay II examines the background to gender differences in work absence. Women are found, as in many earlier studies, to have higher absence levels than men. Our analysis, using finite mixture models, reveals that there is a group of women, comprising about 41% of the women in our sample, with a high average demand for absence. Among men, the high-demand group is smaller, consisting of about 36% of the male sample. Absence behavior differs as much between groups within each gender as it does between men and women. Access to panel data covering the period 1971-1991 enables an analysis of the widening gender gap over time, and our analysis shows that the increased gender gap can be attributed to changes in behavior rather than in observable characteristics. Essay III analyzes the difference in work absence between natives and immigrants. Immigrants are found to have higher absence than natives when measured as the number of absent days; for the number of absence spells, the pattern is about the same for both groups. The analysis, using panel data and count data models, shows that natives and immigrants differ in characteristics concerning family situation, work conditions, and health. We also find that natives and immigrants respond differently to these characteristics; for example, the absence of natives and of immigrants is related differently to both economic incentives and work environment. Finally, our analysis shows that differences in work conditions and work environment can explain only a minor part of the ethnic differences in absence during the 1980s.
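Essay II's finite mixture idea — two latent groups with high and low demand for absence — can be illustrated with a toy EM fit of a two-component Poisson mixture for absence-day counts. This is not the essay's exact specification, which uses richer count models and covariates; all numbers below are made up.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
days = np.concatenate([rng.poisson(2, 600), rng.poisson(15, 400)])  # toy counts

pi, lam = 0.5, np.array([1.0, 10.0])      # initial mixing weight and rates
for _ in range(200):
    # E-step: posterior probability that each worker is in the high-demand group
    p_hi = pi * poisson.pmf(days, lam[1])
    p_lo = (1 - pi) * poisson.pmf(days, lam[0])
    r = p_hi / (p_hi + p_lo)
    # M-step: re-estimate the mixing weight and the component rates
    pi = r.mean()
    lam[1] = (r * days).sum() / r.sum()
    lam[0] = ((1 - r) * days).sum() / (1 - r).sum()

print(f"high-demand share: {pi:.2f}, rates: {lam.round(1)}")
```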
117

Blind Estimation of Perceptual Quality for Modern Speech Communications

Falk, Tiago 05 January 2009 (has links)
Modern speech communication technologies expose users to perceptual quality degradations that were not experienced with conventional telephone systems. Since perceived speech quality is a major contributor to the end user's perception of quality of service, speech quality estimation has become an important research field. In this dissertation, perceptual quality estimators are proposed for several emerging speech communication applications, in particular for i) wireless communications with noise suppression capabilities, ii) wireless-VoIP communications, iii) far-field hands-free speech communications, and iv) text-to-speech systems. First, a general-purpose speech quality estimator is proposed based on statistical models of normative speech behaviour and on innovative techniques to detect multiple signal distortions. The estimators do not depend on a clean reference signal and hence are termed "blind." Quality meters are then distributed along the network chain to allow both quality degradations and quality enhancements to be handled. In order to improve estimation performance for wireless communications, statistical models of noise-suppressed speech are also incorporated. Next, a hybrid signal-and-link-parametric quality estimation paradigm is proposed for emerging wireless-VoIP communications. The algorithm uses VoIP connection parameters to estimate a base quality representative of the packet-switching network. Signal-based distortions are then detected and quantified in order to adjust the base quality accordingly. The proposed hybrid methodology is shown to overcome the limitations of existing pure signal-based and pure link-parametric algorithms. Temporal dynamics information is then investigated for quality diagnosis in hands-free speech communications. A spectro-temporal signal representation, in which speech and reverberation tail components are shown to be separable, is used for blind characterization of room acoustics. In particular, estimators of reverberation time, direct-to-reverberation energy ratio, and reverberant speech quality are developed. Lastly, perceptual quality estimation for text-to-speech systems is addressed. Text- and speaker-independent hidden Markov models, trained on naturally produced speech, are used to capture normative spectral-temporal information. Deviations from the models, computed by means of a log-likelihood measure, are shown to be reliable indicators of multiple quality attributes including naturalness, fluency, and intelligibility. / Thesis (Ph.D., Electrical & Computer Engineering), Queen's University, 2008.
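The last idea — scoring synthesized speech against HMMs trained on natural speech and reading the log-likelihood deviation as a quality indicator — might be sketched as follows. Feature extraction is stubbed out with random arrays and every setting is hypothetical; the dissertation's actual estimators are considerably more elaborate.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
natural_feats = rng.normal(size=(5000, 13))       # stand-in for MFCC frames
tts_feats = rng.normal(0.3, 1.2, size=(400, 13))  # frames from a TTS utterance

# Reference model of "normative" natural speech behaviour.
reference = GaussianHMM(n_components=8, covariance_type="diag",
                        n_iter=50, random_state=0).fit(natural_feats)

# Higher per-frame log-likelihood -> closer to natural speech (toy indicator).
quality_score = reference.score(tts_feats) / len(tts_feats)
print(f"per-frame log-likelihood: {quality_score:.2f}")
```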
118

Models and estimation algorithms for nonparametric finite mixtures with conditionally independent multivariate component densities / Modèles et algorithmes d'estimation pour des mélanges finis de densités de composantes multivariées non paramétriques et conditionnellement indépendantes

Hoang, Vy-Thuy-Lynh 20 April 2017 (has links)
Recently, several authors have proposed models and estimation algorithms for finite nonparametric multivariate mixtures, whose identifiability is typically not obvious. Among the models considered, the assumption that coordinates are independent conditional on the subpopulation from which each observation is drawn has received increasing attention, in view of the theoretical and practical developments it allows, particularly as the number of variables in play grows in the modern statistical framework. In this work we first consider a more general model assuming independence, conditional on the component, of multivariate blocks of coordinates instead of univariate coordinates, allowing any dependence structure within these blocks. Consequently, the density functions of these blocks are completely multivariate and nonparametric. We present identifiability arguments and introduce two methodological estimation algorithms whose computational procedures resemble a true EM algorithm but include an additional density estimation step: a fast algorithm showing empirical efficiency without theoretical justification, and a smoothed algorithm possessing a monotonicity property, as any EM algorithm does, but more computationally demanding. We also discuss computationally efficient estimation methods and derive some strategies. Next, we consider a multivariate extension of the mixture models used in the framework of multiple hypothesis testing, allowing a new multivariate version of False Discovery Rate control. We propose a constrained version of our previous algorithm, specifically designed for this model. The behavior of the EM-type algorithms we propose is studied numerically through several Monte Carlo experiments and on high-dimensional real data, and compared with existing methods in the literature. Finally, the code for our new algorithms is being progressively implemented as new functions in the publicly available package mixtools for the R statistical software.
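A drastically simplified illustration of the "EM with an extra density estimation step" idea, in the spirit of mixtools' npEM but with a single univariate coordinate and two components rather than the multivariate blocks treated in the thesis: each component density is re-estimated by a weighted kernel density estimate at every iteration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.gamma(4, 1, 300)])  # toy mixture

pi = 0.5                                   # mixing weight of component 1
r = rng.uniform(size=x.size)               # random initial responsibilities
for _ in range(30):
    # Density estimation step: weighted KDE per component (nonparametric)
    f1 = gaussian_kde(x, weights=r)(x)
    f0 = gaussian_kde(x, weights=1 - r)(x)
    # E-step: update responsibilities; M-step: update the mixing weight
    r = pi * f1 / (pi * f1 + (1 - pi) * f0)
    pi = r.mean()

print(f"estimated weight of component 1: {pi:.2f}")
```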
119

Probabilistic incremental learning for image recognition : modelling the density of high-dimensional data

Carvalho, Edigleison Francelino January 2014 (has links)
Nowadays, several sensory systems provide data in streams, and these measured observations are frequently high-dimensional, i.e., the number of measured variables is large and the observations arrive in a sequence. This is in particular the case of robot vision systems. Unsupervised and supervised learning with such data streams is challenging because the algorithm should be capable of learning from each observation and then discarding it before considering the next one, whereas many methods require the whole dataset in order to estimate their parameters and are therefore not suitable for online learning. Furthermore, many approaches suffer from the so-called curse of dimensionality (BELLMAN, 1961) and cannot handle high-dimensional input data. To overcome these problems, this work proposes a new probabilistic and incremental neural network model, called the Local Projection Incremental Gaussian Mixture Network (LP-IGMN), which is capable of performing life-long learning with high-dimensional data, i.e., it can learn continuously, considering the stability of the current model's parameters and automatically adjusting its topology by taking into account the subspace boundary found by each hidden neuron. The proposed method can find the intrinsic subspace where the data lie, called the principal subspace. Orthogonal to the principal subspace are the dimensions that are noisy or carry little information, i.e., have small variance; they are described by a single estimated parameter. Therefore, LP-IGMN is robust to different sources of data and can deal with a large number of noisy and/or irrelevant variables in the measured data. To evaluate LP-IGMN we conducted several experiments using simulated and real datasets, and we also demonstrated several applications of our method in image recognition tasks. The results show that LP-IGMN's performance is competitive with, and usually superior to, other state-of-the-art approaches, and that it can be successfully used in applications that require life-long learning in high-dimensional spaces.
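A bare-bones sketch of incremental Gaussian mixture learning in the IGMN family may help fix ideas: observations are processed one at a time, a new component is spawned when no existing one explains the sample (a novelty criterion), and otherwise component statistics are updated online. This omits LP-IGMN's local projections, updates only the single best-matching component where the full algorithm weights all components by their posteriors, and uses illustrative thresholds throughout.

```python
import numpy as np
from scipy.stats import multivariate_normal

class IncrementalGMM:
    def __init__(self, dim, tau=0.01, init_var=1.0):
        self.dim, self.tau, self.init_var = dim, tau, init_var
        self.means, self.covs, self.counts = [], [], []

    def learn(self, x):
        likes = [multivariate_normal.pdf(x, m, c)
                 for m, c in zip(self.means, self.covs)]
        if not likes or max(likes) < self.tau:   # novelty: spawn a component
            self.means.append(x.copy())
            self.covs.append(np.eye(self.dim) * self.init_var)
            self.counts.append(1.0)
            return
        k = int(np.argmax(likes))                # update best-matching component
        self.counts[k] += 1.0
        w = 1.0 / self.counts[k]                 # decaying learning rate
        d = x - self.means[k]
        self.means[k] += w * d                   # online mean update
        self.covs[k] = (1 - w) * self.covs[k] + w * np.outer(d, d)  # approx. cov

model = IncrementalGMM(dim=2)
for x in np.random.default_rng(0).normal(size=(500, 2)):
    model.learn(x)                               # one pass, sample by sample
print("components created:", len(model.means))
```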
120

Small Blob Detection in Medical Images

January 2015 (has links)
Recent advances in medical imaging technology have greatly enhanced imaging-based diagnosis, which requires computationally effective and accurate algorithms to process the images (e.g., to measure the objects) for quantitative assessment. In this dissertation, one type of imaging object is of interest: small blobs. Examples of small blob objects are cells in histopathology images, small breast lesions in ultrasound images, and glomeruli in kidney MR images. This problem is particularly challenging because small blobs often have an inhomogeneous intensity distribution and an indistinct boundary against the background. This research develops a generalized four-phase system for small blob detection. The system includes (1) raw image transformation, (2) Hessian pre-segmentation, (3) feature extraction, and (4) unsupervised clustering for post-pruning. First, detecting blobs in 2D images is studied, and a Hessian-based Laplacian of Gaussian (HLoG) detector is proposed. Using scale space theory as a foundation, the image is smoothed via LoG. Hessian analysis is then launched to identify the single optimal scale, based on which a pre-segmentation is conducted. Novel regional features are extracted from pre-segmented blob candidates and fed to Variational Bayesian Gaussian Mixture Models (VBGMM) for post-pruning. Sixteen cell histology images and two hundred cell fluorescence images are tested to demonstrate the performance of HLoG. Next, as an extension, a Hessian-based Difference of Gaussians (HDoG) detector is proposed, which is capable of identifying small blobs in 3D images. Specifically, kidney glomeruli segmentation from 3D MRI (6 rats, 3 humans) is investigated. The experimental results show that HDoG has the potential to automatically detect glomeruli, enabling new measurements of renal microstructures and pathology in preclinical and clinical studies. Recognizing that computation time is a key factor affecting clinical adoption, the last phase of this research investigates data reduction techniques for VBGMM in HDoG to handle large-scale datasets. A new coreset algorithm is developed for variational Bayesian mixture models. Using the same MRI dataset, it is observed that the four-phase system with coreset-VBGMM performs similarly to using the full dataset but is about 20 times faster. / Doctoral Dissertation, Industrial Engineering, 2015.
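A rough analogue of the four-phase pipeline can be assembled from off-the-shelf pieces: scale-space LoG blob candidates from scikit-image, simple per-blob features, and a variational Bayesian GMM for unsupervised post-pruning. The Hessian pre-segmentation of HLoG/HDoG is not reproduced here, and the feature set and pruning rule are illustrative only.

```python
import numpy as np
from skimage import data, feature
from sklearn.mixture import BayesianGaussianMixture

# Stand-in image with many small bright blobs (not a medical image).
image = data.hubble_deep_field()[0:500, 0:500, 0].astype(float) / 255.0

# Phases 1-2 analogue: blob candidates across scales via Laplacian of Gaussian.
blobs = feature.blob_log(image, min_sigma=1, max_sigma=10,
                         num_sigma=10, threshold=0.1)  # rows: (y, x, sigma)

# Phase 3: toy per-candidate features -- detected scale and local intensity.
feats = np.column_stack([
    blobs[:, 2],
    [image[int(y), int(x)] for y, x, _ in blobs],
])

# Phase 4: VBGMM post-pruning; keep the cluster with the brighter centers.
vbgmm = BayesianGaussianMixture(n_components=2, random_state=0).fit(feats)
keep = vbgmm.predict(feats) == np.argmax(vbgmm.means_[:, 1])
true_blobs = blobs[keep]
print(f"kept {keep.sum()} of {len(blobs)} candidates")
```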
