171

Full-rank Gaussian modeling of convolutive audio mixtures applied to source separation

Duong, Quang-Khanh-Ngoc, 15 November 2011
We consider the problem of separating determined and under-determined reverberant audio mixtures, that is, of extracting the signal of each source from a multichannel mixture. We propose a general Gaussian modeling framework in which the contribution of each source to the mixture channels in the time-frequency domain is modeled as a zero-mean Gaussian random vector whose covariance encodes both the spatial and the spectral characteristics of the source. To better model reverberation, we relax the classical narrowband assumption, which leads to a rank-1 spatial covariance, and compute the theoretical performance bound achievable with a full-rank spatial covariance. Experimental results show a Signal-to-Distortion Ratio (SDR) improvement of 6 dB in weakly to highly reverberant environments, which validates this generalization. We also consider the use of quadratic time-frequency representations and of the auditory-motivated equivalent rectangular bandwidth (ERB) frequency scale to increase the amount of exploitable information and to reduce the overlap between sources in the time-frequency representation. After this theoretical validation of the proposed framework, we focus on estimating the model parameters from a given mixture signal in a practical blind source separation scenario. We propose a family of Expectation-Maximization (EM) algorithms to estimate the parameters in the maximum likelihood (ML) or maximum a posteriori (MAP) sense. We propose a family of spatial location priors inspired by room acoustics theory, as well as a spatial continuity prior. We also study the use of two spectral priors previously used in a single-channel or rank-1 multichannel context: a spectral continuity prior and a nonnegative matrix factorization (NMF) model. The source separation results obtained with the proposed approach are compared with several baseline and state-of-the-art algorithms on simulated mixtures and on real-world recordings in various scenarios.
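The full-rank spatial covariance model lends itself to a closed-form multichannel Wiener estimate of each source's contribution. The sketch below is a toy with invented numbers (2 channels, 2 sources, a single time-frequency bin), not the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 2 channels, 2 sources, a single time-frequency bin.
# Source j contributes c_j ~ N(0, v_j * R_j): v_j is its short-term
# spectral power, R_j a full-rank (here 2x2) spatial covariance.
def random_spd(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return A @ A.conj().T + n * np.eye(n)   # Hermitian positive definite

v = np.array([2.0, 0.5])              # spectral powers v_j(f, t), invented
R = [random_spd(2), random_spd(2)]    # full-rank spatial covariances

# The mixture covariance is the sum of the source covariances.
Sigma_x = sum(vj * Rj for vj, Rj in zip(v, R))

x = rng.normal(size=2) + 1j * rng.normal(size=2)   # observed mixture bin

# Multichannel Wiener filter: posterior mean of source j's contribution.
def wiener_estimate(j):
    return v[j] * R[j] @ np.linalg.inv(Sigma_x) @ x

c_hat = [wiener_estimate(j) for j in range(2)]

# The posterior means of the sources sum back to the observed mixture.
print(np.allclose(c_hat[0] + c_hat[1], x))   # True
```

Because the Wiener gains sum to the identity by construction, the source estimates are a conservative decomposition of the mixture, which is one reason this posterior-mean estimator is convenient inside an EM loop.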
172

System Availability Maximization and Residual Life Prediction under Partial Observations

Jiang, Rui, 10 January 2012
Many real-world systems experience deterioration with usage and age, which often leads to low product quality, high production cost, and low system availability. Most previous maintenance and reliability models in the literature do not incorporate condition monitoring information for decision making, which often results in poor failure prediction for partially observable deteriorating systems. For that reason, the development of fault prediction and control schemes using condition-based maintenance techniques has received considerable attention in recent years. This research presents a new framework for predicting failures of a partially observable deteriorating system using Bayesian control techniques. A time series model is fitted to a vector observation process representing partial information about the system state. Residuals are then calculated using the fitted model; these residuals are indicative of system deterioration. The deterioration process is modeled as a 3-state continuous-time homogeneous Markov process. States 0 and 1 are not observable, representing healthy (good) and unhealthy (warning) system operational conditions, respectively; only the failure state 2 is assumed to be observable. Preventive maintenance can be carried out at any sampling epoch, and corrective maintenance is carried out upon system failure. The form of the optimal control policy that maximizes the long-run expected average availability per unit time has been investigated, and it has been proved that a control limit policy is optimal for decision making. The model parameters have been estimated using the Expectation-Maximization (EM) algorithm. The optimal Bayesian fault prediction and control scheme, considering long-run average availability maximization along with a practical statistical constraint, has been proposed and compared with the age-based replacement policy. The optimal control limit and sampling interval are calculated in the semi-Markov decision process (SMDP) framework.
Another Bayesian fault prediction and control scheme has been developed based on the average run length (ARL) criterion. Comparisons with traditional control charts are provided. Formulae for the mean residual life and the distribution function of system residual life have been derived in explicit forms as functions of a posterior probability statistic. The advantage of the Bayesian model over the well-known 2-parameter Weibull model in system residual life prediction is shown. The methodologies are illustrated using simulated data, real data obtained from the spectrometric analysis of oil samples collected from transmission units of heavy hauler trucks in the mining industry, and vibration data from a planetary gearbox machinery application.
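The Bayesian filtering idea behind the control limit policy can be sketched in a discrete-time toy version of the hidden healthy/warning dynamics. All parameter values below are invented for illustration; the thesis works with the continuous-time model and an optimized limit:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete-time toy version of the hidden part of the model: state 0
# (healthy) and state 1 (warning) are unobservable, and residuals are
# Gaussian with a state-dependent mean. All numbers are invented.
P = np.array([[0.95, 0.05],    # transition matrix per sampling epoch
              [0.00, 1.00]])   # deterioration does not self-repair
means, sd = np.array([0.0, 2.0]), 1.0
control_limit = 0.9            # maintain once P(warning | data) exceeds this

def gauss_pdf(y, mu):
    return np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def posterior_path(residuals):
    """Bayesian filter: posterior P(state = warning) after each residual."""
    pi = np.array([1.0, 0.0])  # system starts healthy
    out = []
    for y in residuals:
        pi = pi @ P                      # predict one epoch ahead
        pi = pi * gauss_pdf(y, means)    # weight by the likelihood
        pi = pi / pi.sum()
        out.append(pi[1])
    return np.array(out)

# Residuals drifting from the healthy to the warning regime at epoch 30.
residuals = np.concatenate([rng.normal(0.0, sd, 30), rng.normal(2.0, sd, 30)])
post = posterior_path(residuals)
stop = int(np.argmax(post > control_limit))   # first epoch over the limit
print(stop, post[-1] > control_limit)
```

The posterior probability of the warning state is the one-dimensional statistic on which the control limit policy acts: preventive maintenance is triggered the first time it crosses the limit.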
174

Variational Approximations and Other Topics in Mixture Models

Dang, Sanjeena, 24 August 2012
Mixture model-based clustering has become an increasingly popular data analysis technique since its introduction almost fifty years ago. Families of mixture models are said to arise when the component parameters, usually the component covariance matrices, are decomposed and a number of constraints are imposed. Within the family setting, it is necessary to choose the member of the family --- i.e., the appropriate covariance structure --- in addition to the number of mixture components. To date, the Bayesian information criterion (BIC) has proved most effective for this model selection process, and the expectation-maximization (EM) algorithm has been predominantly used for parameter estimation. We deviate from the EM-BIC rubric, using variational Bayes approximations for parameter estimation and the deviance information criterion (DIC) for model selection. The variational Bayes approach alleviates some of the computational complexities associated with the EM algorithm. We use this approach on the most famous family of Gaussian mixture models known as Gaussian parsimonious clustering models (GPCM). These models have an eigen-decomposed covariance structure. Cluster-weighted modelling (CWM) is another flexible statistical framework for modelling local relationships in heterogeneous populations on the basis of weighted combinations of local models. In particular, we extend cluster-weighted models to include an underlying latent factor structure of the independent variable, resulting in a novel family of models known as parsimonious cluster-weighted factor analyzers. The EM-BIC rubric is utilized for parameter estimation and model selection. Some work on a mixture of multivariate t-distributions is also presented, with a linear model for the mean and a modified Cholesky-decomposed covariance structure leading to a novel family of mixture models. 
In addition to model-based clustering, these models are also used for model-based classification, i.e., semi-supervised clustering. Parameters are estimated using the EM algorithm, and an alternative to the BIC for model selection is also considered. / NSERC PGS-D
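The EM-BIC rubric that the thesis departs from can be illustrated in a few lines. This is a generic univariate sketch, not the GPCM family itself; the quantile initialization and variance floor are pragmatic choices of this toy:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated univariate Gaussian clusters.
x = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])

def em_gmm(x, k, iters=100):
    """Plain EM for a univariate Gaussian mixture, plus its BIC."""
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread-out init
    var = np.full(k, x.var())
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        dens = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation
        n = r.sum(axis=0) + 1e-12
        w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / n, 1e-3)
    dens = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
            / np.sqrt(2 * np.pi * var))
    loglik = np.log(dens.sum(axis=1)).sum()
    n_params = 3 * k - 1          # free weights, means, variances
    return mu, n_params * np.log(len(x)) - 2 * loglik   # smaller BIC is better

bics = {k: em_gmm(x, k)[1] for k in (1, 2, 3)}
print(min(bics, key=bics.get))    # the number of components BIC selects
```

The variational Bayes alternative studied in the thesis replaces the E-step responsibilities and M-step point estimates with approximate posterior distributions over both labels and parameters, which is what alleviates part of this computational burden.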
175

Bayesian analysis and clustering for zero-modified continuous models

Labrecque-Synnott, Félix, 08 1900
Zero-inflated models, both discrete and continuous, have a large variety of applications and fairly well-known properties. Some work has been done on zero-deflated and zero-modified discrete models, but the usual formulation of continuous zero-inflated models -- a mixture between a continuous density and a Dirac mass at zero -- precludes their extension to cover the zero-deflated case. We introduce an alternative formulation of zero-inflated continuous models, along with a natural extension to the zero-deflated case. Parameter estimation is first studied within the classical frequentist framework, and several methods for obtaining the maximum likelihood estimators are proposed. The problem of point estimation is also considered from a Bayesian point of view. Hypothesis testing, aiming at determining whether data are zero-inflated, zero-deflated or not zero-modified, is considered under both the classical and Bayesian paradigms. The proposed estimation and testing methods are assessed through simulation studies and applied to aggregated rainfall data. The data are shown to be zero-deflated, demonstrating the relevance of the proposed model. We next consider the clustering of samples of zero-deflated data. Such data exhibit strong non-normality, so the usual methods for determining the number of clusters are expected to perform poorly. We argue that Bayesian clustering based on the marginal distribution of the observations takes the particularities of the model into account and exhibits better performance. Several clustering methods are compared in a simulation study, and the proposed method is applied to aggregated rainfall data sampled from 28 measuring stations in British Columbia.
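The estimation mechanics of a zero-modified continuous model can be sketched with a simple special case: a point mass at zero mixed with an exponential positive part (the thesis's alternative formulation is more general). Because the likelihood factorizes, the MLEs are closed-form:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative zero-modified continuous model: a point mass p at zero
# and an Exponential(rate) density on the positive part. All parameter
# values are invented; only the estimation mechanics are shown.
p_true, rate_true, n = 0.15, 0.5, 5000
zeros = rng.random(n) < p_true
x = np.where(zeros, 0.0, rng.exponential(1.0 / rate_true, n))

# The likelihood factorizes into a Bernoulli part for the zeros and a
# continuous part for the positives, so the MLEs are closed-form:
#   p_hat    = proportion of exact zeros
#   rate_hat = 1 / mean of the positive observations
p_hat = np.mean(x == 0.0)
rate_hat = 1.0 / x[x > 0].mean()

print(abs(p_hat - p_true) < 0.02, abs(rate_hat - rate_true) < 0.05)  # True True
```

Testing for zero-inflation versus deflation then amounts to comparing the estimated zero-probability with the value implied by the baseline model, which is where the classical and Bayesian tests of the thesis come in.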
176

Actuarial applications of multivariate phase-type distributions: model calibration and credibility

Hassan Zadeh, Amin, January 2009
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.
177

Specification analysis of interest rates factors: an international perspective

Tiozzo Pezzoli, Luca, 5 December 2013
The aim of this thesis is to model the dynamics of international term structures of interest rates while taking several dependence channels into consideration. Thanks to a new international Treasury yield curve database, we observe that the explained-variability decision criterion suggested by the literature is not able to select the best combination of factors characterizing the joint dynamics of yield curves. We propose a new methodology based on maximizing the likelihood function of a Gaussian state-space model with common and local factors, and we solve the associated identification problem in an innovative way. Estimating the model on several sets of countries, we select two global (and three local) factors which are also useful for forecasting macroeconomic variables in each economy considered. In addition, our method allows us to detect hidden factors in international bond returns: they are not visible through a classical principal component analysis of expected bond returns, but they are helpful for forecasting inflation and industrial production. Keywords: international Treasury yield curves, common and local factors, state-space models, EM algorithm, international bond risk premia, principal components.
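The likelihood being maximized comes from a Gaussian state-space model. A minimal single-factor sketch (toy parameters, a plain Kalman filter, no local factors or identification constraints) shows the prediction-error decomposition of that likelihood:

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal Gaussian state-space model with one common factor driving
# several observed series (a toy version of the common/local setup):
#   f_t = phi * f_{t-1} + w_t,   y_t = Lam * f_t + e_t
phi, q = 0.9, 1.0                    # factor persistence, innovation variance
Lam = np.array([1.0, 0.8, 0.5])      # loadings of 3 series on the factor
h = 0.25                             # diagonal measurement noise variance

# Simulate data from the model.
T = 300
f = np.zeros(T)
for t in range(1, T):
    f[t] = phi * f[t - 1] + rng.normal(0, np.sqrt(q))
Y = f[:, None] * Lam + rng.normal(0, np.sqrt(h), (T, 3))

def kalman_loglik(Y, phi, q, Lam, h):
    """Prediction-error decomposition of the Gaussian likelihood."""
    m, P, ll = 0.0, q / (1 - phi ** 2), 0.0      # stationary prior
    for y in Y:
        m, P = phi * m, phi ** 2 * P + q         # predict
        v = y - Lam * m                          # innovation
        S = P * np.outer(Lam, Lam) + h * np.eye(3)
        ll += -0.5 * (3 * np.log(2 * np.pi) + np.linalg.slogdet(S)[1]
                      + v @ np.linalg.solve(S, v))
        K = P * np.linalg.solve(S, Lam)          # Kalman gain
        m, P = m + K @ v, P * (1 - K @ Lam)      # update
    return ll

# The likelihood is higher at the true persistence than far from it.
print(kalman_loglik(Y, 0.9, q, Lam, h) > kalman_loglik(Y, 0.1, q, Lam, h))  # True
```

In the thesis this likelihood is maximized over the full parameter set (with common and local factors) via the EM algorithm rather than by the grid-style comparison shown here.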
178

Unobserved Heterogeneity and Corner Solutions in Micro-econometric Multicrop Production Choice Models

Koutchade, Obafèmi-Philippe, 19 January 2018
In this thesis, we are interested in questions of unobserved heterogeneity and corner solutions in acreage choice models. To address these questions, we build on a multicrop production choice model with NMNL (nested multinomial logit) acreage shares, for which we propose extensions. These extensions lead to specific estimation problems, to which we provide solutions. The question of unobserved heterogeneity is dealt with by considering a random parameter specification, which allows us to take into account the effects of unobserved heterogeneity on all the parameters of the model. We show that stochastic versions of the EM algorithm are particularly suitable for estimating this type of model. Our estimation and simulation results show that farmers react heterogeneously to economic incentives and that ignoring this heterogeneity can lead to biased simulated effects of public policies. To take account of corner solutions in acreage choices, we propose a model based on endogenous regime-switching models with regime-specific fixed costs. Unlike approaches based on censored regression systems, our model is fully consistent from a micro-economic viewpoint. Our results show that regime fixed costs play an important role in farmers' choice of whether to produce certain crops and that they are, in the short term, an important determinant of acreage choices.
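A toy illustration of why unobserved heterogeneity matters in share models: in a plain multinomial logit (the thesis uses a nested MNL with richer structure), averaging predicted acreage shares over a distribution of random coefficients differs from plugging in the mean coefficient. All numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy multinomial-logit acreage shares: share_k is proportional to
# exp(beta * expected_return_k). Random coefficients capture unobserved
# heterogeneity: each farm draws its own responsiveness beta.
returns = np.array([1.0, 0.8, 0.5])           # expected returns of 3 crops

def shares(beta):
    u = beta * returns
    e = np.exp(u - u.max())                    # numerically stable softmax
    return e / e.sum()

# Heterogeneous farms: beta is lognormal, so every farm prefers higher
# returns, but with farm-specific intensity.
betas = rng.lognormal(mean=0.5, sigma=0.6, size=1000)
mean_shares = np.mean([shares(b) for b in betas], axis=0)

# Ignoring heterogeneity (plugging in the mean beta) generally gives
# different aggregate shares than averaging over the beta distribution,
# because the share function is nonlinear in beta.
plug_in = shares(betas.mean())
print(np.round(mean_shares, 3), np.round(plug_in, 3))
```

The gap between the two aggregate predictions is the toy analogue of the biased simulated policy effects reported when heterogeneity is ignored.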
179

Essays on Birnbaum-Saunders models

Santos, Helton Saulo Bezerra dos, January 2013
In this thesis, we present three different applications of Birnbaum-Saunders models. In Chapter 2, we introduce a new nonparametric kernel method for estimating asymmetric densities based on generalized skew-Birnbaum-Saunders distributions. Kernels based on these distributions have the advantage of providing flexibility in the asymmetry and kurtosis levels. In addition, the generalized skew-Birnbaum-Saunders kernel density estimators are free of boundary bias and achieve the optimal rate of convergence for the mean integrated squared error of nonnegative asymmetric kernel density estimators. We carry out a data analysis consisting of two parts. First, we conduct a Monte Carlo simulation study to evaluate the performance of the proposed method. Second, we use this method to estimate the density of three real air pollutant concentration data sets; the numerical results favor the proposed nonparametric estimators. In Chapter 3, we propose a new family of autoregressive conditional duration models based on scale-mixture Birnbaum-Saunders (SBS) distributions. The Birnbaum-Saunders (BS) distribution is a model that has received considerable attention recently due to its good properties. An extension of this distribution is the class of SBS distributions, which (i) inherits several of the good properties of the BS distribution; (ii) allows maximum likelihood estimation to be formulated efficiently via the EM algorithm; and (iii) yields a robust estimation procedure, among other properties. The autoregressive conditional duration model is the primary family of models for analyzing high-frequency financial transaction data. The methodology studied here includes parameter estimation by the EM algorithm, inference for these parameters, a predictive model, and a residual analysis. We carry out a Monte Carlo simulation study to evaluate the performance of the proposed methodology, and we assess its practical usefulness using real data on financial transactions from the New York Stock Exchange. Chapter 4 deals with process capability indices (PCIs), which are tools widely used by companies to determine the quality of a product and the performance of their production processes. These indices were developed for processes whose quality characteristic has a normal distribution. In practice, many of these characteristics do not follow this distribution, and the PCIs must then be modified to account for the non-normality; using unmodified PCIs can lead to inadequate results. To establish quality policies that resolve this inadequacy, data transformations have been proposed, as well as the use of quantiles from non-normal distributions. An asymmetric non-normal distribution that has become very popular in recent times is the Birnbaum-Saunders (BS) distribution. We propose, develop, implement, and apply a methodology based on PCIs for the BS distribution. Furthermore, we carry out a simulation study to evaluate the performance of the proposed methodology. The methodology has been implemented in the noncommercial, open-source statistical software R. We apply it to a real data set to illustrate its flexibility and potentiality.
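The BS distribution underlying all three chapters has a convenient stochastic representation in terms of a standard normal variable, which makes sampling trivial. A small sketch checking two textbook properties of BS(alpha, beta):

```python
import numpy as np

rng = np.random.default_rng(6)

# Birnbaum-Saunders via its stochastic representation: if Z ~ N(0, 1),
# then T = (beta / 4) * (alpha * Z + sqrt(alpha^2 Z^2 + 4))^2
# follows BS(alpha, beta).
def rbs(alpha, beta, size):
    z = rng.normal(size=size)
    return beta / 4.0 * (alpha * z + np.sqrt(alpha ** 2 * z ** 2 + 4.0)) ** 2

alpha, beta = 0.5, 2.0
t = rbs(alpha, beta, 200_000)

# Two known properties: E[T] = beta * (1 + alpha^2 / 2), and beta is the
# median (T is a monotone function of Z, and T = beta at Z = 0).
print(abs(t.mean() - beta * (1 + alpha ** 2 / 2)) < 0.02,
      abs(np.median(t) - beta) < 0.02)   # True True
```

The same representation is what makes EM-based estimation tractable for the scale-mixture extensions used in Chapter 3.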
180

Inference in bivariate discrete distributions

Chire, Verônica Amparo Quispe, 26 November 2013
The analysis of bivariate data arises in several areas of knowledge, when the data of interest are obtained in a paired way and exhibit correlation between counts. In this work, the Holgate bivariate Poisson, bivariate generalized Poisson, and bivariate zero-inflated Poisson models are presented, which are useful for modeling correlated bivariate count data. Illustrative applications are presented for these models, and they are compared using the AIC and BIC model selection criteria as well as the asymptotic likelihood ratio test. In particular, we propose a Bayesian approach to the Holgate bivariate Poisson and bivariate zero-inflated Poisson models, based on the Gibbs sampling algorithm with data augmentation.
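Holgate's bivariate Poisson model mentioned above is built by trivariate reduction, which a short simulation makes concrete (rates and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

# Holgate's bivariate Poisson by trivariate reduction: with independent
# U ~ Pois(l1), V ~ Pois(l2), W ~ Pois(l3), the pair
# (X, Y) = (U + W, V + W) has Poisson margins with means l1 + l3 and
# l2 + l3, and Cov(X, Y) = l3 >= 0 (the shared component).
l1, l2, l3, n = 2.0, 1.0, 1.5, 500_000
U, V, W = (rng.poisson(l, n) for l in (l1, l2, l3))
X, Y = U + W, V + W

print(abs(X.mean() - (l1 + l3)) < 0.02,
      abs(np.cov(X, Y)[0, 1] - l3) < 0.05)   # True True
```

The shared component W is also the latent variable that the Gibbs sampler with data augmentation reintroduces: conditional on W, the counts decouple into independent Poisson pieces.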
