181

Estimação do tamanho populacional a partir de um modelo de captura-recaptura com heterogeneidade / Population size estimation from a capture-recapture model with heterogeneity

Pezzott, George Lucas Moraes 14 March 2014 (has links)
Financiadora de Estudos e Projetos / In this work, we consider the estimation of the number of errors in a software system from a closed population. The estimation of the population size is based on the capture-recapture method, in which the software is examined, in parallel, by a number of reviewers. The probabilistic model adopted accommodates situations in which the reviewers are independent and homogeneous (equally efficient) and each error belongs to one class of a disjoint partition defined by its detection probability. We propose an iterative process to obtain maximum likelihood estimates, in which the EM algorithm is used to estimate the nuisance parameters. The population parameters were also estimated under the Bayesian approach, using Markov chain Monte Carlo (MCMC) simulation via the Gibbs sampling algorithm, with latent variables introduced into the conditional posterior distributions. The two approaches were applied to simulated data and to two real data sets from the literature.
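As a rough illustration of the capture-recapture idea (not the heterogeneous partition model of the dissertation), the sketch below estimates the number of errors N under the simpler homogeneous model, with the common detection probability profiled out of the likelihood; the reviewer count, simulation values and search range are invented for the example.

```python
import numpy as np
from scipy.special import gammaln

def profile_loglik(N, D, M, R):
    # Log-likelihood of the homogeneous model (R equally efficient,
    # independent reviewers) with the detection probability profiled
    # out: D distinct errors found, M detections in total, so
    # p_hat = M / (N * R) for a candidate population size N.
    if N < D:
        return -np.inf
    p = M / (N * R)
    return (gammaln(N + 1) - gammaln(N - D + 1)
            + M * np.log(p) + (N * R - M) * np.log1p(-p))

def estimate_N(histories):
    # histories: binary detection matrix (distinct errors found x R).
    D, R = histories.shape
    M = int(histories.sum())
    Ns = np.arange(D, 50 * D)
    ll = [profile_loglik(N, D, M, R) for N in Ns]
    return int(Ns[np.argmax(ll)])

# Simulated check: 3 reviewers, 100 errors, detection probability 0.3;
# only errors caught by at least one reviewer are observed.
rng = np.random.default_rng(1)
X = (rng.random((100, 3)) < 0.3)
X = X[X.any(axis=1)]
print(estimate_N(X))   # should be near 100
```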
182

Essays on Birnbaum-Saunders models

Santos, Helton Saulo Bezerra dos January 2013 (has links)
In this thesis, we present three different applications of Birnbaum-Saunders models.
In Chapter 2, we introduce a new nonparametric kernel method for estimating asymmetric densities based on generalized skew-Birnbaum-Saunders distributions. Kernels based on these distributions have the advantage of providing flexibility in the asymmetry and kurtosis levels. In addition, the generalized skew-Birnbaum-Saunders kernel density estimators are free of boundary bias and achieve the optimal rate of convergence for the mean integrated squared error of nonnegative asymmetric kernel density estimators. We carry out a data analysis consisting of two parts. First, we conduct a Monte Carlo simulation study to evaluate the performance of the proposed method. Second, we use this method to estimate the density of three real air pollutant concentration data sets; the numerical results favor the proposed nonparametric estimators. In Chapter 3, we propose a new family of autoregressive conditional duration models based on scale-mixture Birnbaum-Saunders (SBS) distributions. The Birnbaum-Saunders (BS) distribution is a model that has received considerable attention recently due to its good properties. An extension of this distribution is the class of SBS distributions, which (i) inherits several of the good properties of the BS distribution; (ii) allows maximum likelihood estimation to be formulated efficiently via the EM algorithm; and (iii) yields a robust estimation procedure, among other properties. The autoregressive conditional duration model is the primary family of models for analyzing high-frequency financial transaction data. The methodology includes parameter estimation by the EM algorithm, inference for these parameters, a predictive model and a residual analysis. We carry out a Monte Carlo simulation study to evaluate the performance of the proposed methodology, and we assess its practical usefulness using real data of financial transactions from the New York Stock Exchange. Chapter 4 deals with process capability indices (PCIs), tools widely used by companies to determine the quality of a product and the performance of their production processes. These indices were developed for processes whose quality characteristic has a normal distribution; in practice, many such characteristics do not follow this distribution, and the PCIs must then be modified to account for non-normality, since the use of unmodified PCIs can lead to inadequate results. To establish quality policies that resolve this inadequacy, data transformations have been proposed, as well as the use of quantiles of non-normal distributions. An asymmetric non-normal distribution which has become very popular in recent times is the Birnbaum-Saunders (BS) distribution. We propose, develop, implement and apply a methodology based on PCIs for the BS distribution, and we carry out a simulation study to evaluate its performance. The methodology has been implemented in the noncommercial, open-source statistical software R. We apply it to a real data set to illustrate its flexibility and potential.
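For the PCI part, one common quantile-based construction (a Clements-type analogue of Cp and Cpk, which may differ in detail from the indices developed in the thesis) replaces the normal 6-sigma spread by the 0.00135 and 0.99865 quantiles of the BS law. SciPy's fatiguelife distribution is the Birnbaum-Saunders distribution; the parameter and specification values below are made up.

```python
from scipy import stats

def bs_pci(alpha, beta, lsl, usl):
    # Quantile-based capability indices for a Birnbaum-Saunders
    # quality characteristic: 'fatiguelife' with shape c = alpha
    # and scale = beta is the BS(alpha, beta) distribution.
    bs = stats.fatiguelife(alpha, scale=beta)
    q_lo, med, q_hi = bs.ppf([0.00135, 0.5, 0.99865])
    cp = (usl - lsl) / (q_hi - q_lo)
    cpk = min((usl - med) / (q_hi - med), (med - lsl) / (med - q_lo))
    return cp, cpk

# Hypothetical process BS(0.5, 1.0) with specification limits [0.2, 4].
print(bs_pci(0.5, 1.0, 0.2, 4.0))
```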
184

Estimação via EM e diagnóstico em modelos misturas assimétricas com regressão / EM estimation and diagnostics in skew mixture regression models

Louredo, Graciliano Márcio Santos 26 February 2018 (has links)
FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais / The objective of this work is to present some contributions that improve maximum likelihood estimation via the EM algorithm in skew mixture regression models, and to carry out global and local influence analysis for these models. These contributions, mostly computational in nature, aim to solve common problems in statistical modeling more efficiently. Among them is the replacement of the methods used in versions of the GEM algorithm with techniques that reduce the problem approximately to a classical EM algorithm for the main examples of skew scale mixtures of normal distributions. After the estimation step, we also discuss the main existing techniques for diagnosing influential points, with the necessary adaptations to the models in focus. With this approach we aim to extend regression analysis to the most recent distributions in this class in the literature, and to pave the way for the use of similar techniques in other classes of models.
185

Détection et classification de signatures temporelles CAN pour l’aide à la maintenance de sous-systèmes d’un véhicule de transport collectif / Detection and classification of temporal CAN signatures to support maintenance of public transportation vehicle subsystems

Cheifetz, Nicolas 09 September 2013 (has links)
This thesis is mainly dedicated to the fault detection step of an industrial diagnosis process. The work is motivated by the monitoring of two complex subsystems of a transit bus which impact the availability of vehicles and their maintenance costs: the brake and the door systems. This thesis describes several tools that monitor the operation of these systems. We choose a pattern recognition approach based on the analysis of data collected from a new IT architecture on board the buses. The proposed methods sequentially detect a structural change in a data stream and take advantage of prior knowledge of the monitored systems. The detector applied to the brakes is based on the output variables (related to the brake system) of a physical dynamic model of the vehicle, which is experimentally validated in this work; the detection step is then performed by multivariate control charts applied to multidimensional data. The detection strategy dedicated to the doors deals directly with data collected by embedded sensors during opening and closing cycles, with no need for a physical model. We propose a sequential testing approach using a generative model to describe the functional data. This regression model segments multidimensional curves into several regimes, and its parameters are estimated via a specific EM algorithm in a semi-supervised mode. The results obtained from simulated and real data highlight the effectiveness of the proposed methods for both the brake and the door studies.
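As a sketch of the kind of multivariate control chart used in the brake detection step, the following Hotelling T² chart flags observations whose Mahalanobis distance from the in-control behaviour exceeds a chi-square limit (which assumes a large in-control reference sample); the dimensions, sample sizes and shift are illustrative, not taken from the thesis.

```python
import numpy as np
from scipy import stats

def t2_chart(reference, stream, alpha=0.005):
    # Hotelling T^2 control chart: flag observations whose squared
    # Mahalanobis distance to the in-control mean exceeds a
    # chi-square-based upper control limit.
    mu = reference.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    ucl = stats.chi2.ppf(1 - alpha, df=reference.shape[1])
    d = stream - mu
    t2 = np.einsum('ij,jk,ik->i', d, S_inv, d)
    return t2 > ucl, t2, ucl

# Example: 2-D in-control reference data, then a stream whose last
# five observations carry a mean shift.
rng = np.random.default_rng(0)
ref = rng.normal(size=(500, 2))
stream = np.vstack([rng.normal(size=(20, 2)),
                    rng.normal(loc=2.5, size=(5, 2))])
alarms, t2, ucl = t2_chart(ref, stream)
print(alarms)
```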
186

Essays in dynamic macroeconometrics

Bańbura, Marta 26 June 2009 (has links)
The thesis contains four essays covering topics in the field of macroeconomic forecasting.

The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered to be a more robust approach in the presence of instabilities. On the other hand, it poses the challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying those models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied for forecasting, structural analysis or the construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of U.S. GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and nonsynchronous data releases (sometimes referred to as a "ragged edge"). This is relevant as, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.

The first chapter of the thesis, entitled "A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP", is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the euro area. In particular, we are interested in the role of "soft" and "hard" data in the GDP forecast and how it is related to their timeliness. The soft data include surveys and financial indicators and reflect market expectations; they are usually promptly available. In contrast, the hard indicators on real activity measure directly certain components of GDP (e.g. industrial production) and are published with a significant delay. We propose several measures to assess the role of individual series or groups of series in the forecast while taking into account their respective publication lags. We find that surveys and financial data contain important information beyond the monthly real activity measures for the GDP forecasts, once their timeliness is properly accounted for.

The second chapter, entitled "Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data", is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a "ragged edge" but can also include, e.g., mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from the flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. Applied to small factor models by e.g. Geweke (1977), Sargent and Sims (1977) and Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation in the case of a large cross-section, Doz, Giannone and Reichlin (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to modify the EM steps to the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision, and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect on the forecast of quarterly variables and short-history monthly series like the Purchasing Managers' surveys.

The third chapter, entitled "Large Bayesian VARs", is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecast considerations, as argued above, the size of the information set can also be relevant for structural analysis; see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation of De Mol, Giannone and Reichlin (2008), who study the case of Bayesian regressions. They show that with increasing size of the model one should shrink more to avoid overfitting, but when data are collinear one is still able to extract the relevant sample information. We apply this principle in the case of VARs. We compare the large model with smaller systems in terms of forecasting performance and structural analysis of the effect of a monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.

The fourth chapter, entitled "Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales", proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate on whether money provides a reliable signal for future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong; see e.g. Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) and Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; however, their analysis is performed in-sample. In this chapter, it is investigated empirically which frequency bands, and for which variables, are the most relevant for the out-of-sample forecast of inflation when the information from prices, money and real activity is considered. To extract different frequency components from a series, a wavelet transform is applied. It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of series into different frequency bands. Its application to multivariate out-of-sample forecasting is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast. / Doctorat en Sciences économiques et de gestion
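The tightness-versus-size principle of the third chapter can be sketched with a ridge-type Bayesian VAR(1): each equation gets a Normal prior around zero with common tightness lam, and lam is decreased as the cross-section grows. This is a simplified stand-in for the Litterman prior (unit error variances, zero prior mean, no lag decay), so it illustrates the mechanics only.

```python
import numpy as np

def bvar_posterior_mean(Y, lam):
    # Posterior mean of VAR(1) coefficients B in Y_t = B Y_{t-1} + e_t
    # under B_ij ~ N(0, lam^2) and unit error variances: a ridge
    # regression shrinking toward zero.
    X, Z = Y[1:], Y[:-1]
    G = Z.T @ Z + np.eye(Y.shape[1]) / lam**2
    return np.linalg.solve(G, Z.T @ X).T

# Shrink more as the system grows (lam proportional to 1/n): the
# average absolute coefficient of the larger models is pulled harder
# toward the prior, guarding against overfitting.
rng = np.random.default_rng(0)
for n in (5, 20, 100):
    Y = np.zeros((120, n))
    for t in range(1, 120):
        Y[t] = 0.5 * Y[t - 1] + rng.normal(size=n)   # true B = 0.5 I
    B_hat = bvar_posterior_mean(Y, lam=2.0 / n)
    print(n, round(float(np.abs(B_hat).mean()), 4))
```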
187

Modely pro data s nadbytečnými nulami / Models for zero-inflated data

Matula, Dominik January 2016 (has links)
The aim of this thesis is to provide a comprehensive overview of the main approaches to modeling data with excess zeros. Three main subclasses of zero-modified models (ZMM) are described here: zero-inflated models (the main focus of this work), zero-truncated models and hurdle models. Models of each subclass are defined, and the construction of maximum likelihood estimates of the regression coefficients is described. ZMM models are mostly based on the Poisson or negative binomial type 2 (NB2) distribution. In this work, the author extends the theory to ZIM models based on any discrete distribution of exponential type and describes the construction of the MLE of the regression coefficients of these models. Few existing works consider ZIM models based on the negative binomial type 1 (NB1) distribution. This distribution is not of exponential type, so the common method of MLE construction in ZIM models cannot be used; this work provides a modification of the method using the quasi-likelihood approach. Two simulation studies conclude the work.
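For the simplest member of the zero-inflated subclass, the zero-inflated Poisson without covariates, the EM updates are available in closed form; this intercept-only sketch (the regression structure of the thesis is omitted) shows the mechanics.

```python
import numpy as np

def zip_em(y, n_iter=200):
    # EM for a zero-inflated Poisson without covariates: latent z_i
    # indicates a structural zero; pi is the mixture weight and lam
    # the Poisson mean.
    y = np.asarray(y, dtype=float)
    pi, lam = 0.5, max(y.mean(), 1e-8)
    for _ in range(n_iter):
        # E-step: posterior probability that an observed zero is
        # structural rather than a Poisson zero.
        w = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step: closed-form updates.
        pi = w.mean()
        lam = ((1 - w) * y).sum() / (1 - w).sum()
    return pi, lam

# Example: 30% structural zeros on top of Poisson(2.5).
rng = np.random.default_rng(42)
z = rng.random(5000) < 0.3
y = np.where(z, 0, rng.poisson(2.5, size=5000))
print(zip_em(y))   # roughly (0.3, 2.5)
```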
188

Sound source localization with data and model uncertainties using the EM and Evidential EM algorithms / Estimation de sources acoustiques avec prise en compte de l'incertitude de propagation

Wang, Xun 09 December 2014 (has links)
This work addresses the problem of multiple sound source localization for both deterministic and random signals measured by an array of microphones. The problem is solved in a statistical framework via maximum likelihood. The pressure measured by a microphone is interpreted as a mixture of latent signals emitted by the sources; then, both the sound source locations and strengths can be estimated using an expectation-maximization (EM) algorithm. In this thesis, two kinds of uncertainties are also considered: on the microphone locations and on the wave number. These uncertainties are transposed to the data in the belief functions framework. Then, the source locations and strengths can be estimated using a variant of the EM algorithm known as the Evidential EM (E2M) algorithm.
The first part of this work begins with the deterministic signal model, without consideration of uncertainty. The EM algorithm is used to estimate the source locations and strengths, and the update equations for the model parameters are provided. Furthermore, experimental results are presented and compared with beamforming and statistically optimized near-field holography (SONAH), which demonstrates the advantage of the EM algorithm. The second part raises the issue of model uncertainty and shows how the uncertainties on microphone locations and wave number can be taken into account at the data level. In this case, the notion of likelihood is extended to uncertain data. Then, the E2M algorithm is used to solve the sound source estimation problem. In both simulations and a real experiment, the E2M algorithm proves to be more robust in the presence of model and data uncertainty. The third part of this work considers the case of random signals, in which the amplitude is modeled as a Gaussian random variable. Both the certain and uncertain cases are investigated. In the former case, the EM algorithm is employed to estimate the sound sources. In the latter case, microphone location and wave number uncertainties are quantified as in the second part of the thesis. Finally, the source locations and the variances of the random amplitudes are estimated using the E2M algorithm.
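A rough sketch of the deterministic-case EM, in the spirit of the classical EM for superimposed signals: each source's latent signal is reconstructed by splitting the residual equally across sources (E-step), then its grid location and complex amplitude are refit by least squares (M-step). The free-field Green's function, the candidate grid, the frequency, and the per-source residual refresh are assumptions of this illustration rather than details taken from the thesis.

```python
import numpy as np

def green(points, mics, k):
    # Free-field Green's functions exp(-jkr)/(4 pi r) between each
    # candidate point and each microphone.
    r = np.linalg.norm(points[:, None, :] - mics[None, :, :], axis=2)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

def em_localize(p, mics, grid, k, n_src, n_iter=100):
    # EM for superimposed signals: the measured pressure p is a sum
    # of n_src latent source contributions plus noise.
    Gall = green(grid, mics, k)                      # (n_grid, n_mics)
    norms = (np.abs(Gall) ** 2).sum(axis=1)
    rng = np.random.default_rng(0)
    idx = rng.choice(len(grid), n_src, replace=False)
    a = np.ones(n_src, dtype=complex)
    contrib = a[:, None] * Gall[idx]
    for _ in range(n_iter):
        resid = p - contrib.sum(axis=0)
        for s in range(n_src):
            x_s = contrib[s] + resid / n_src         # E-step
            score = np.abs(Gall.conj() @ x_s) ** 2 / norms
            idx[s] = int(np.argmax(score))           # M-step: location
            g = Gall[idx[s]]
            a[s] = (g.conj() @ x_s) / norms[idx[s]]  # M-step: amplitude
            contrib[s] = a[s] * g
            resid = p - contrib.sum(axis=0)          # sequential refresh
    return grid[idx], a

# Noiseless example: two sources above a 16-microphone line array.
mics = np.column_stack([np.linspace(-1, 1, 16), np.zeros(16), np.zeros(16)])
grid = np.column_stack([np.linspace(-2, 2, 81), np.zeros(81), np.ones(81)])
true = np.array([[-0.6, 0.0, 1.0], [0.7, 0.0, 1.0]])
k = 2 * np.pi * 800 / 343.0                          # wave number at 800 Hz
p = np.array([2.0, 1.5]) @ green(true, mics, k)
print(em_localize(p, mics, grid, k, n_src=2)[0])
```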
189

Prise en compte de l'hétérogénéité inobservée des exploitations agricoles dans la modélisation du changement structurel : illustration dans le cas de la France / Accounting for the unobserved heterogeneity of farms in modelling structural change: an illustration for France (keywords: agricultural policy; Expectation-Maximisation (EM) algorithm; farms; Markovian process; mixture models; spatial interdependence; structural change; unobserved heterogeneity)

Saint-Cyr, Legrand Dunold Fils 12 December 2016 (has links)
Structural change in farming has long been the subject of considerable interest among agricultural economists and policy makers. To account for heterogeneity in farmers' behaviours, a mixture Markov modelling framework is applied to analyse this process for the first time in agricultural economics. The performance of this approach is first investigated using a restrictive form of the model, and its general form is then applied to study the impact of some drivers of structural change, including agricultural policy measures. To identify the channels through which interdependency between neighbouring farms arises in this process, the mixture modelling approach is applied to analyse both farm survival and farm growth. The main conclusions of this thesis are threefold. Firstly, accounting for the generally unobserved heterogeneity in the transition process of farms allows better representation of structural change in farming and leads to more accurate predictions of farm-size distributions than the models usually used so far. Secondly, the impacts of the main drivers of structural change themselves depend on the specific unobservable farm types revealed by the model. Lastly, the mixture modelling approach enables the identification of different unobserved relationships between neighbouring farms that contribute to the structural change observed at an aggregate or regional level.
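The mixture-of-Markov-chains machinery can be sketched as follows: each farm's sequence of size categories is scored against every latent type in the E-step, and the type weights and transition matrices are re-estimated in closed form in the M-step. The covariates and spatial terms of the thesis are omitted, and the state/type counts and data generator below are invented for illustration.

```python
import numpy as np

def mixture_markov_em(seqs, n_states, n_types, n_iter=100, seed=0):
    # EM for a finite mixture of Markov chains: farm i follows one of
    # n_types unobserved transition matrices P[g] with weight w[g].
    rng = np.random.default_rng(seed)
    C = np.zeros((len(seqs), n_states, n_states))    # transition counts
    for i, s in enumerate(seqs):
        for a, b in zip(s[:-1], s[1:]):
            C[i, a, b] += 1
    w = np.full(n_types, 1.0 / n_types)
    P = rng.dirichlet(np.ones(n_states), size=(n_types, n_states))
    for _ in range(n_iter):
        # E-step: log-responsibility of type g for farm i.
        logr = np.log(w) + np.einsum('ijk,gjk->ig', C, np.log(P))
        logr -= logr.max(axis=1, keepdims=True)
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form weight and transition-matrix updates.
        w = r.mean(axis=0)
        P = np.einsum('ig,ijk->gjk', r, C) + 1e-12
        P /= P.sum(axis=2, keepdims=True)
    return w, P

# Two latent types over 3 size classes (type labels may be permuted).
rng = np.random.default_rng(1)
P_true = np.array([[[.8, .2, .0], [.1, .8, .1], [.0, .2, .8]],
                   [[.5, .3, .2], [.3, .4, .3], [.2, .3, .5]]])
seqs = []
for _ in range(400):
    g = int(rng.random() < 0.4)
    s = [int(rng.integers(3))]
    for _ in range(15):
        s.append(int(rng.choice(3, p=P_true[g][s[-1]])))
    seqs.append(s)
print(mixture_markov_em(seqs, 3, 2)[0])   # near [0.6, 0.4] up to ordering
```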
190

Expectation-Maximization (EM) Algorithm Based Kalman Smoother For ERD/ERS Brain-Computer Interface (BCI)

Khan, Md. Emtiyaz 06 1900 (has links) (PDF)
No description available.
