51

Modélisation probabiliste des courbes S-N / Probabilistic modelling of S-N curves

Fouchereau, Rémy 01 April 2014 (has links)
The S-N curve is the main tool for analyzing and predicting the fatigue lifetime of a material, component or structure. However, standard models, whether based on fracture mechanics or on standard probabilistic approaches, cannot fit the S-N curve over the whole range of cycles without information on the material's microstructure. That information comes from costly fractographic investigations that are rarely available in an industrial production setting. Purely statistical models, on the other hand, do not need microstructure information, but they offer no material interpretation and therefore cannot be used for service-life prediction. Moreover, fatigue test results are widely scattered, especially in the high-cycle-fatigue region, where the S-N curve splits into two modes. This motivates a new probabilistic model: a specific mixture model grounded in the fracture-mechanics view of fatigue that requires no additional microstructure information. It exploits the fact that fatigue lifetime can be regarded as the sum of a crack-initiation life and a crack-propagation life. The model parameters are estimated with an EM algorithm whose maximisation step combines Newton-Raphson optimisation with Monte Carlo integration. The resulting initiation-propagation model provides a parsimonious representation of S-N curves whose parameters are easily interpreted by mechanical or materials engineers. The model was tested on simulations and applied to real fatigue test data (Inconel 718), where it fits the data well for all available strain levels and over the whole range of cycles.
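As a rough illustration of the bimodal scatter and the mixture view of fatigue life described in this abstract, the Python sketch below simulates two failure regimes at a fixed strain level and fits a two-component mixture on log-lifetimes. All parameters are invented for illustration; this is not the thesis's initiation-propagation model nor its Inconel 718 data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical illustration: at a fixed strain level, some specimens fail early
# while others survive much longer, producing the bimodal scatter mentioned above.
n = 300
early = rng.lognormal(mean=11.0, sigma=0.3, size=n // 2)    # short lives (invented)
late = rng.lognormal(mean=13.0, sigma=0.4, size=n - n // 2)  # long lives (invented)
cycles_to_failure = np.concatenate([early, late])

# Fit a two-component Gaussian mixture on log-lifetimes; the weights, means and
# variances play the role of interpretable "regime" parameters.
gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(np.log(cycles_to_failure).reshape(-1, 1))
print("weights:", gmm.weights_)
print("log-life means:", gmm.means_.ravel())
```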
52

Seleção de modelos através de um teste de hipótese genuinamente Bayesiano: misturas de normais multivariadas e hipóteses separadas / Model selection by a genuinely Bayesian significance test: Multivariate normal mixtures and separated hypotheses

Lauretto, Marcelo de Souza 03 October 2007 (has links)
In this thesis we propose the Full Bayesian Significance Test (FBST), introduced by Pereira and Stern in 1999, as a tool for analysing multivariate normal mixture models. We then extend the mixture framework to another classical problem in statistics, the problem of separate models. For both proposals we perform numerical experiments inspired by important biological problems: the unsupervised classification of genes based on their expression levels, and the discrimination between the Weibull and Gompertz models, two classical distributions widely used in survival analysis.
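A minimal sketch of the FBST evidence computation is shown below for a simple point null in a univariate normal model with known variance, a much simpler setting than the multivariate mixtures of the thesis; the data, prior and sample sizes are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data and a vague conjugate prior for mu; H0: mu = 0.
x = rng.normal(loc=0.4, scale=1.0, size=30)
sigma2, prior_var = 1.0, 100.0

# Conjugate posterior for mu: N(post_mean, post_var).
post_var = 1.0 / (len(x) / sigma2 + 1.0 / prior_var)
post_mean = post_var * x.sum() / sigma2
posterior = stats.norm(post_mean, np.sqrt(post_var))

# Supremum of the posterior density over H0 (a single point here).
f_star = posterior.pdf(0.0)

# e-value = 1 - posterior probability of the tangential set
# {mu : posterior density(mu) > f_star}, estimated by Monte Carlo.
draws = posterior.rvs(size=100_000, random_state=1)
ev = 1.0 - np.mean(posterior.pdf(draws) > f_star)
print("FBST evidence in favour of H0:", ev)
```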
53

An incremental gaussian mixture network for data stream classification in non-stationary environments / Uma rede de mistura de gaussianas incrementais para classificação de fluxos contínuos de dados em cenários não estacionários

Diaz, Jorge Cristhian Chamby January 2018 (has links)
Data stream classification poses many challenges for the data mining community when the environment is non-stationary. The greatest challenge in learning classifiers from data streams is adapting to concept drift, which occurs as the underlying concepts change over time. Two main ways to build adaptive approaches are ensemble methods and incremental algorithms. Ensemble methods play an important role because their modularity provides a natural way of adapting to change. Incremental algorithms are faster and have better noise tolerance than ensembles, but they impose more restrictions on the data streams. It is therefore a challenge to combine the flexibility and adaptability of an ensemble under concept drift with the simplicity of a single incrementally trained classifier. With this motivation, this dissertation proposes an incremental, online and probabilistic algorithm for classification in problems involving concept drift. The algorithm, called IGMN-NSE, is an adaptation of the IGMN algorithm. Its two main contributions over the IGMN are improved predictive power for classification tasks and adaptation mechanisms that achieve good performance in non-stationary environments. Extensive studies on both synthetic and real-world data demonstrate that the proposed algorithm tracks changing environments very closely, regardless of the type of concept drift.
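The sketch below illustrates the general idea of an incremental Gaussian mixture learner: create a component when a point looks novel, otherwise update the closest component online. It is a simplified stand-in, not the IGMN-NSE algorithm, and the novelty threshold and initial variance are invented.

```python
import numpy as np

class IncrementalGMMSketch:
    """Very simplified sketch of incremental Gaussian mixture learning.
    Components are created when no existing one explains a new point and are
    otherwise updated online. This is not the IGMN-NSE algorithm itself."""

    def __init__(self, dim, novelty=3.0, init_var=1.0):
        self.dim, self.novelty, self.init_var = dim, novelty, init_var
        self.means, self.covs, self.counts = [], [], []

    def _mahalanobis2(self, x, k):
        d = x - self.means[k]
        return d @ np.linalg.solve(self.covs[k], d)

    def update(self, x):
        x = np.asarray(x, dtype=float)
        # Create a new component if x is "novel" for every existing component.
        if not self.means or all(self._mahalanobis2(x, k) > self.novelty ** 2
                                 for k in range(len(self.means))):
            self.means.append(x.copy())
            self.covs.append(self.init_var * np.eye(self.dim))
            self.counts.append(1.0)
            return
        # Otherwise, assign x to the closest component and update it online.
        k = int(np.argmin([self._mahalanobis2(x, j) for j in range(len(self.means))]))
        self.counts[k] += 1.0
        lr = 1.0 / self.counts[k]
        d = x - self.means[k]
        self.means[k] += lr * d
        self.covs[k] = (1 - lr) * self.covs[k] + lr * np.outer(d, d)

model = IncrementalGMMSketch(dim=2)
for point in np.random.default_rng(0).normal(size=(200, 2)):
    model.update(point)
print("components created:", len(model.means))
```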
54

Continuous reinforcement learning with incremental Gaussian mixture models / Aprendizagem por reforço contínua com modelos de mistura gaussianas incrementais

Pinto, Rafael Coimbra January 2017 (has links)
The original contribution of this thesis is a novel algorithm that integrates a sample-efficient function approximator with reinforcement learning in continuous state spaces. The research includes the development of a scalable, online and incremental algorithm capable of learning from a single pass through the data. This algorithm, called the Fast Incremental Gaussian Mixture Network (FIGMN), was first employed as a sample-efficient function approximator for the state space of continuous reinforcement learning tasks, which, combined with linear Q-learning, yields competitive performance. The same function approximator was then used to model the joint space of states and Q-values, all within a single FIGMN, resulting in a concise and data-efficient reinforcement learning algorithm, i.e., one that learns from very few interactions with the environment. A single episode is enough to learn the investigated tasks in most trials. The results are analysed to explain the properties of the resulting algorithm, and it is observed that the FIGMN function approximator brings important advantages to reinforcement learning compared with conventional neural networks.
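As a rough illustration of combining a Gaussian-mixture feature map with linear Q-learning, the sketch below uses a fixed set of Gaussian features on a toy one-dimensional task. The FIGMN itself grows its components incrementally, which is omitted here, and the environment and hyperparameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed Gaussian feature map over a 1-D state space (a stand-in for the
# adaptive FIGMN components; hypothetical setup).
centers = np.linspace(-1.0, 1.0, 8)
width = 0.25

def phi(s):
    # Responsibility-like features: normalised Gaussian activations.
    a = np.exp(-0.5 * ((s - centers) / width) ** 2)
    return a / a.sum()

n_actions = 2
W = np.zeros((n_actions, len(centers)))   # linear Q-function weights
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):
    # Toy dynamics: action 0 moves left, action 1 moves right; goal is s near 1.
    s2 = np.clip(s + (0.1 if a == 1 else -0.1) + 0.01 * rng.standard_normal(), -1, 1)
    done = s2 >= 0.99
    return s2, (1.0 if done else -0.01), done

for episode in range(200):
    s = -0.5
    for t in range(500):                   # step cap keeps the sketch bounded
        q = W @ phi(s)
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q))
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * np.max(W @ phi(s2)))
        W[a] += alpha * (target - W[a] @ phi(s)) * phi(s)   # TD(0) update
        s = s2
        if done:
            break
```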
55

It Is Better to Be Upside Than Sharpe!

D'Apuzzo, Daniele 01 April 2017 (has links)
Based on the assumption that returns in Commercial Real Estate (CRE) are normally distributed, the Sharpe Ratio has been the standard risk-adjusted performance measure for the past several years. Research has questioned whether this assumption can reasonably be made. The Upside Potential Ratio (UPR) is an alternative risk-adjusted performance measure, but it differs from the Sharpe Ratio only in its assumption of skewed returns. We will provide reasonable evidence that CRE returns should not be fitted with a normal distribution and present the Gaussian Mixture Model (GMM) as our choice of distribution for capturing skewness. We will then use a GMM distribution to measure the performance of domestic CRE markets via the UPR. Additional insights will be presented by introducing an alternative risk-adjusted performance measure that we will call the D-ratio. We will show how the UPR and the D-ratio can provide a toolbox that can be added to any existing investment strategy when assessing markets' past performance and timing of entry. The intent of this thesis is not to provide a comprehensive framework for CRE investment decisions but to introduce statistical and mathematical tools that can help any portfolio manager augment an investment strategy already in place.
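A small sketch of the two performance measures discussed above, computed on simulated skewed returns, with a two-component Gaussian mixture fitted to the same series. The return series, risk-free rate and minimum acceptable return are assumptions for illustration, and the D-ratio is not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical monthly CRE-style returns with negative skew (illustrative only).
returns = np.concatenate([rng.normal(0.010, 0.015, 220),
                          rng.normal(-0.030, 0.030, 30)])
rf, mar = 0.002, 0.002   # assumed risk-free rate and minimum acceptable return

sharpe = (returns.mean() - rf) / returns.std(ddof=1)

# Upside Potential Ratio: expected excess above MAR over downside deviation.
upside = np.mean(np.maximum(returns - mar, 0.0))
downside_dev = np.sqrt(np.mean(np.maximum(mar - returns, 0.0) ** 2))
upr = upside / downside_dev

# A two-component Gaussian mixture captures the skew a single normal misses.
gmm = GaussianMixture(n_components=2, random_state=0).fit(returns.reshape(-1, 1))

print(f"Sharpe: {sharpe:.3f}  UPR: {upr:.3f}")
print("mixture means:", gmm.means_.ravel(), "weights:", gmm.weights_)
```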
56

Asymptotic methods for tests of homogeneity for finite mixture models

Stewart, Michael Ian January 2002 (has links)
We present limit theory for tests of homogeneity for finite mixture models. More specifically, we derive the asymptotic distribution of certain random quantities used for testing that a mixture of two distributions is in fact just a single distribution. Our methods apply to cases where the mixture component distributions come from one of a wide class of one-parameter exponential families, both continuous and discrete. We consider two random quantities, one related to testing simple hypotheses, the other composite hypotheses. For simple hypotheses we consider the maximum of the standardised score process, which is itself a test statistic. For composite hypotheses we consider the maximum of the efficient score process, which is not itself a statistic (it depends on the unknown true distribution) but is asymptotically equivalent to certain common test statistics. We show that both quantities can be approximated by the maximum of a certain Gaussian process depending on the sample size and the true distribution of the observations, which when suitably normalised has a limiting distribution of the Gumbel extreme value type. Although the limit theory is not practically useful for computing approximate p-values, we use Monte Carlo simulations to show that another method suggested by the theory, which uses a Studentised version of the maximum-score statistic and simulates a Gaussian process to compute approximate p-values, is remarkably accurate and uses a fraction of the computing resources that a straight Monte Carlo approximation would.
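For the simple-hypothesis case, the maximum standardised-score statistic can be illustrated with a short simulation. The sketch below tests a standard normal null against a two-component normal mixture over a grid of candidate component locations and approximates a p-value by Monte Carlo; the grid, sample sizes and replication counts are arbitrary choices, not those of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate locations for the second mixture component.
theta_grid = np.linspace(0.2, 3.0, 30)

def max_score_statistic(x):
    # Score of the mixing weight at zero, evaluated at each candidate theta and
    # standardised by its null standard deviation sqrt(n * (exp(theta^2) - 1)).
    n = len(x)
    s = np.array([np.sum(np.exp(t * x - t ** 2 / 2) - 1.0) /
                  np.sqrt(n * (np.exp(t ** 2) - 1.0)) for t in theta_grid])
    return s.max()

# Null distribution of the statistic by simulation, then a p-value for one sample.
null_stats = np.array([max_score_statistic(rng.standard_normal(200))
                       for _ in range(2000)])
observed = max_score_statistic(np.concatenate([rng.standard_normal(180),
                                               rng.normal(2.0, 1.0, 20)]))
print("approximate p-value:", np.mean(null_stats >= observed))
```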
57

Foreground Segmentation of Moving Objects

Molin, Joel January 2010 (has links)
Foreground segmentation is a common first step in tracking and surveillance applications. The purpose of foreground segmentation is to provide later stages of image processing with an indication of where interesting data can be found. This thesis is an investigation of how foreground segmentation can be performed in two contexts: as a pre-step to trajectory tracking and as a pre-step in indoor surveillance applications.

Three methods are selected and detailed: a single Gaussian method, a Gaussian mixture model method, and a codebook method. Experiments are then performed on typical input video using the methods. It is concluded that the Gaussian mixture model produces the output which yields the best trajectories when used as input to the trajectory tracker. An extension is proposed to the Gaussian mixture model which reduces shadow, improving the performance of foreground segmentation in the surveillance context.
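A minimal example of Gaussian-mixture background subtraction with shadow suppression, using OpenCV's MOG2 implementation rather than the specific methods evaluated in the thesis; the video path is a placeholder.

```python
import cv2

# Sketch of GMM-based foreground segmentation with OpenCV's MOG2 subtractor.
cap = cv2.VideoCapture("input_video.avi")   # placeholder path
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)           # 255 = foreground, 127 = shadow, 0 = background
    foreground = (mask == 255).astype("uint8") * 255
    cv2.imshow("foreground", foreground)
    if cv2.waitKey(30) & 0xFF == 27:         # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```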
58

Statistical Background Models with Shadow Detection for Video Based Tracking

Wood, John January 2007 (has links)
A common problem when using background models to segment moving objects from video sequences is that the shadows cast by objects usually differ significantly from the background and therefore get detected as foreground. This causes several problems when extracting and labeling objects, such as object shape distortion and several objects merging together. The purpose of this thesis is to explore various possibilities for handling this problem.

Three methods for statistical background modeling are reviewed. All methods work on a per-pixel basis: the first is based on approximating the median, the next on Gaussian mixture models, and the last on channel representation. It is concluded that all methods detect cast shadows as foreground.

A study of existing methods for handling cast shadows has been carried out in order to gain knowledge of the subject and gather ideas. A common approach is to transform the RGB color representation into a representation that separates color into intensity and chromatic components in order to determine whether or not newly sampled pixel values belong to the background. The color spaces HSV, IHSL, CIELAB, YCbCr, and a color model proposed in the literature (Horprasert et al.) are discussed and compared for the purpose of shadow detection. It is concluded that Horprasert's color model is the most suitable for this purpose.

The thesis ends with a proposal of a method that combines background modeling using Gaussian mixture models with shadow detection using Horprasert's color model. It is concluded that, while not perfect, such a combination can be very helpful in segmenting objects and detecting their cast shadows.
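A simplified sketch of Horprasert-style shadow detection based on brightness and chromaticity distortion; the per-channel variance normalisation and automatic threshold selection of the full model are omitted, and the thresholds below are invented.

```python
import numpy as np

def classify_pixels(frame, background, cd_thresh=15.0, alpha_low=0.5, alpha_high=0.95):
    """Simplified sketch of Horprasert-style shadow detection.
    frame, background: float arrays of shape (H, W, 3)."""
    eps = 1e-6
    # Brightness distortion: how much the expected background color must be
    # scaled to best match the observed pixel.
    alpha = np.sum(frame * background, axis=2) / (np.sum(background ** 2, axis=2) + eps)
    # Chromaticity distortion: residual color difference after that scaling.
    cd = np.linalg.norm(frame - alpha[..., None] * background, axis=2)

    labels = np.zeros(frame.shape[:2], dtype=np.uint8)        # 0 = background
    labels[cd > cd_thresh] = 2                                 # 2 = foreground
    shadow = (cd <= cd_thresh) & (alpha >= alpha_low) & (alpha < alpha_high)
    labels[shadow] = 1                                         # 1 = cast shadow
    return labels

rng = np.random.default_rng(0)
bg = rng.uniform(50, 200, size=(120, 160, 3))
labels = classify_pixels(bg * 0.7, bg)    # a uniformly darkened frame: mostly shadow
print("shadow fraction:", np.mean(labels == 1))
```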
59

Bayesian Regression Inference Using a Normal Mixture Model

Maldonado, Hernan 08 August 2012 (has links)
In this thesis we develop a two-component mixture model to perform Bayesian regression. We implement the model computationally using the Gibbs sampler algorithm and apply it to a dataset of differences in time measurement between two clocks. The dataset has "good" time measurements and "bad" time measurements, which we associate with the two components of our mixture model. From our theoretical work we show that latent variables are a useful tool for implementing our Bayesian normal mixture model with two components. After applying the model to the data we found that it reasonably assigned probabilities of occurrence to the two states of the phenomenon under study; it also identified two processes with the same slope, different intercepts and different variances. / McAnulty College and Graduate School of Liberal Arts / Computational Mathematics / MS Thesis
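A compact Gibbs-sampler sketch for a two-component normal mixture regression, simplified by assuming known component variances and a vague normal prior on the coefficients. The data are simulated and the priors are assumptions, so this illustrates the general approach rather than the thesis's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: two regression regimes with the same slope, different
# intercepts and different noise levels (invented parameters).
n = 200
x = rng.uniform(0, 10, n)
z_true = rng.random(n) < 0.3
y = np.where(z_true, 2.0 + 0.5 * x + rng.normal(0, 1.0, n),    # "bad" measurements
                     0.0 + 0.5 * x + rng.normal(0, 0.2, n))    # "good" measurements

X = np.column_stack([np.ones(n), x])
sigma2 = np.array([0.2 ** 2, 1.0 ** 2])      # assumed known component variances
beta = np.zeros((2, 2))                      # [component, (intercept, slope)]
p = 0.5                                      # mixing weight of component 1
prior_prec = np.eye(2) / 100.0               # vague N(0, 100 I) prior

for it in range(2000):
    # 1. Sample latent labels given current parameters.
    dens = np.stack([np.exp(-0.5 * (y - X @ beta[k]) ** 2 / sigma2[k]) /
                     np.sqrt(2 * np.pi * sigma2[k]) for k in (0, 1)])
    prob1 = p * dens[1] / ((1 - p) * dens[0] + p * dens[1] + 1e-300)
    z = rng.random(n) < prob1
    # 2. Sample regression coefficients for each component (conjugate update).
    for k, mask in enumerate([~z, z]):
        Xk, yk = X[mask], y[mask]
        prec = Xk.T @ Xk / sigma2[k] + prior_prec
        cov = np.linalg.inv(prec)
        mean = cov @ (Xk.T @ yk / sigma2[k])
        beta[k] = rng.multivariate_normal(mean, cov)
    # 3. Sample the mixing weight.
    p = rng.beta(1 + z.sum(), 1 + (~z).sum())

# A real analysis would store draws after burn-in; here only the last is shown.
print("posterior draw of betas:\n", beta, "\nmixing weight:", p)
```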
60

Learning from Incomplete Data

Ghahramani, Zoubin, Jordan, Michael I. 24 January 1995 (has links)
Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives: the likelihood-based and the Bayesian. The goal is two-fold: to place current neural network approaches to missing data within a statistical framework, and to describe a set of algorithms, derived from the likelihood-based framework, that handle clustering, classification, and function approximation from incomplete data in a principled and efficient manner. These algorithms are based on mixture modeling and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster, Laird, and Rubin 1977), both for the estimation of mixture components and for coping with the missing data.
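The EM idea for incomplete data can be illustrated with the classical single-Gaussian case: in the E-step, missing entries are replaced by their conditional expectations (with a conditional-covariance correction to the second moments), and in the M-step the mean and covariance are re-estimated. The sketch below uses simulated data and omits the mixture components discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated multivariate normal data with entries missing at random.
n, d = 500, 3
true_cov = np.array([[1.0, 0.6, 0.2], [0.6, 1.0, 0.4], [0.2, 0.4, 1.0]])
data = rng.multivariate_normal(np.array([1.0, -1.0, 0.5]), true_cov, size=n)
data[rng.random((n, d)) < 0.2] = np.nan           # knock out ~20% of entries

mu = np.zeros(d)
sigma = np.eye(d)
for _ in range(50):
    sum_x = np.zeros(d)
    sum_xx = np.zeros((d, d))
    for row in data:
        obs = ~np.isnan(row)
        mis = ~obs
        x_hat = row.copy()
        cond_cov = np.zeros((d, d))
        if mis.any():
            # E-step: conditional mean and covariance of missing given observed.
            s_oo_inv = np.linalg.inv(sigma[np.ix_(obs, obs)])
            reg = sigma[np.ix_(mis, obs)] @ s_oo_inv
            x_hat[mis] = mu[mis] + reg @ (row[obs] - mu[obs])
            cond_cov[np.ix_(mis, mis)] = (sigma[np.ix_(mis, mis)]
                                          - reg @ sigma[np.ix_(obs, mis)])
        sum_x += x_hat
        sum_xx += np.outer(x_hat, x_hat) + cond_cov
    # M-step: update mean and covariance from expected sufficient statistics.
    mu = sum_x / n
    sigma = sum_xx / n - np.outer(mu, mu)

print("estimated mean:", mu)
print("estimated covariance:\n", sigma)
```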
