21

Class Enumeration and Parameter Bias in Growth Mixture Models with Misspecified Time-Varying Covariates: A Monte Carlo Simulation Study

Palka, Jayme M. 12 1900 (has links)
Growth mixture modeling (GMM) is a useful tool for examining both between- and within-person change over time and for uncovering unobserved heterogeneity in growth trajectories. Importantly, correct extraction of latent classes and parameter recovery can depend on the type of covariates used. Time-varying covariates (TVCs) can influence class membership but are rarely included in GMMs as predictors; at other times, TVCs are incorrectly modeled as time-invariant covariates (TICs). Additionally, maximum likelihood (ML) estimation of GMMs can produce problematic results, including convergence failures and suboptimal local maxima, and Bayesian estimation may prove a useful alternative in such cases. The present Monte Carlo simulation study assessed class enumeration accuracy and parameter recovery of GMMs with a TVC, particularly when the TVC has been incorrectly specified as a TIC. Both ML and Bayesian estimation were examined. Results indicated that class enumeration indices, particularly absolute indices, perform less favorably when the TVC is misspecified. Under TVC misspecification, parameter bias also exceeded the generally accepted cutoff of 10%, particularly for variance estimates. Researchers are advised to continue using a variety of enumeration indices, particularly relative indices, during class enumeration, and to interpret variance parameter estimates with caution when a GMM contains a misspecified TVC.
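As a rough, hypothetical illustration of the bias criterion the abstract mentions (not code from the thesis), the sketch below computes average relative parameter bias across simulated Monte Carlo replications and flags it against the 10% cutoff; all values are made up.

```python
import numpy as np

def relative_bias(estimates, true_value):
    """Average relative bias of a parameter across Monte Carlo replications."""
    estimates = np.asarray(estimates, dtype=float)
    return (estimates.mean() - true_value) / true_value

# Hypothetical replication results for a growth-factor variance parameter.
rng = np.random.default_rng(42)
true_var = 0.50
est_correct = rng.normal(loc=0.51, scale=0.05, size=500)   # TVC correctly specified
est_misspec = rng.normal(loc=0.62, scale=0.08, size=500)   # TVC misspecified as a TIC

for label, est in [("correct TVC", est_correct), ("TVC-as-TIC", est_misspec)]:
    rb = relative_bias(est, true_var)
    flag = "exceeds" if abs(rb) > 0.10 else "within"
    print(f"{label}: relative bias = {rb:+.1%} ({flag} the 10% cutoff)")
```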
22

Modeling Transition Probabilities for Loan States Using a Bayesian Hierarchical Model

Monson, Rebecca Lee 30 November 2007 (has links) (PDF)
A Markov chain model can be used to model loan defaults because loans move through delinquency states as the borrower fails to make monthly payments. Each entry of the transition matrix gives the probability that a borrower in a given state one month moves to a particular delinquency state the next month. To use this model, the transition probabilities must be known, yet they are unknown quantities, and there may not be sufficient data to estimate the rarer transitions directly. A Bayesian hierarchical model is therefore postulated: by exploiting similarities between types or families of loans, the hierarchy improves estimation, especially for probabilities with little associated data. The transition probabilities are estimated using Markov chain Monte Carlo (MCMC), specifically the Metropolis-Hastings algorithm.
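A minimal sketch of the idea, assuming hypothetical count data: the thesis fits a hierarchical model with Metropolis-Hastings, but the non-hierarchical conjugate baseline below (independent Dirichlet posteriors per transition row) shows why rare transitions are hard to estimate and what pooling across loan families would have to improve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly transition counts for one loan type between
# delinquency states: current, 30 days, 60 days, 90+ days.
states = ["current", "30d", "60d", "90d+"]
counts = np.array([
    [9500, 450, 30, 20],   # from current
    [300, 120, 70, 10],    # from 30d
    [40, 30, 50, 30],      # from 60d
    [5, 2, 3, 90],         # from 90d+ (rare transitions: little data)
])

# Conjugate baseline: each row ~ Multinomial with a Dirichlet(alpha) prior,
# so the posterior of row i is Dirichlet(alpha + counts[i]).
alpha = np.ones(len(states))  # flat prior; a hierarchical model would instead
                              # share strength across loan types or families
posterior_draws = np.stack(
    [rng.dirichlet(alpha + row, size=2000) for row in counts]
)  # shape: (from_state, draw, to_state)

post_mean = posterior_draws.mean(axis=1)
print(np.round(post_mean, 3))
```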
23

Generalized Polynomial Chaos and Markov Chain Monte Carlo Methods for Nonlinear Filtering

Cai, Sheng 15 August 2014 (has links)
Filtering, or estimation of a system's states, is widely used and actively developed in science and engineering; this thesis proposes the structure of a new nonlinear filter. The focus is on the propagation and update steps of the new filter. The algorithms used in the filter, including generalized polynomial chaos, Markov chain Monte Carlo, and Gaussian mixture model algorithms, are introduced. The propagation and update steps of the proposed filter are then applied to two nonlinear problems: the Van der Pol oscillator and the two-body system. Simulations show that the results of both steps are reasonable and that their designs merit further testing. The propagation step matches the accuracy of a quasi-Monte Carlo simulation while using far fewer points, and the update step builds a useful Gaussian mixture model as the posterior distribution.
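As a hedged stand-in for the propagation step (plain Monte Carlo rather than the generalized polynomial chaos the thesis uses), the sketch below pushes an uncertain initial state through the Van der Pol oscillator with SciPy; the parameters and uncertainties are illustrative, and the thesis reports that gPC-based propagation matches quasi-Monte Carlo accuracy with far fewer points than a sampler like this needs.

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, y, mu=1.0):
    """Van der Pol oscillator: x'' - mu*(1 - x^2)*x' + x = 0."""
    x, v = y
    return [v, mu * (1.0 - x**2) * v - x]

rng = np.random.default_rng(1)
n_samples = 200

# Uncertain initial state: Gaussian scatter around a nominal point.
y0_samples = rng.normal(loc=[2.0, 0.0], scale=[0.1, 0.1], size=(n_samples, 2))

# Propagate each sample to t = 5 and summarize the pushed-forward uncertainty.
finals = np.array([
    solve_ivp(van_der_pol, (0.0, 5.0), y0, rtol=1e-8).y[:, -1]
    for y0 in y0_samples
])

print("propagated mean:", finals.mean(axis=0))
print("propagated std: ", finals.std(axis=0))
```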
24

Regression Model Stochastic Search via Local Orthogonalization

Xu, Ruoxi 16 December 2011 (has links)
No description available.
25

Ecosystem Models in a Bayesian State Space Framework

Smith Jr, John William 17 June 2022 (has links)
Bayesian approaches are increasingly being used to embed mechanistic process models into statistical state space frameworks for environmental prediction and forecasting applications. In this study, I focus on Bayesian state space models (SSMs) for modeling the temporal dynamics of carbon in terrestrial ecosystems. Chapter 1 introduces ecological forecasting, state space models, and the challenges of using SSMs for ecosystems. Chapter 2 provides a brief background on SSMs and common methods of parameter estimation. In Chapter 3, we simulate data from an example model (DALECev) using driver data from the Talladega site of the National Ecological Observatory Network (NEON) and perform a simulation study of its performance under varying frequencies of observation data. We show that as observation frequency decreases, the effective sample size of our precision estimates becomes too small for reliable inference. We introduce a method of tuning the time resolution of the latent process so that high-frequency flux data can still be used, and show that this increases the sampling efficiency of the precision parameters. Finally, we show that data cloning is a suitable method for assessing the identifiability of parameters in ecosystem models. In Chapter 4, we introduce a method for embedding positive process models into lognormal SSMs. Our approach, based on moment matching, allows practitioners to embed process models with arbitrary variance structures into the lognormally distributed stochastic process and observation components of a state space model. We compare and contrast the interpretations of our lognormal models with two existing approaches, the Gompertz and Moran-Ricker SSMs. We use our method to create four state space models based on the Gompertz and Moran-Ricker process models: two with a density-dependent variance structure for the process and observations, and two with a constant variance structure. We design and conduct a simulation study to compare the forecast performance of the four models to their counterparts under model misspecification. We find that when the observation precision is estimated, the Gompertz model and its density-dependent moment-matching counterpart have the best forecasting performance under misspecification as measured by the average ignorance score (IGN) and continuous ranked probability score (CRPS), even outperforming the true generating model across thirty synthetic datasets. When observation precisions were fixed, all models except the Gompertz displayed a significant improvement in forecasting performance for IGN, CRPS, or both. Our method was then tested on data from the NOAA Dengue Forecasting Challenge, where our novel constant-variance lognormal models had the best CRPS overall and the best CRPS and IGN at one- and two-week forecast horizons. This shows the importance of a flexible method for embedding sensible dynamics, as constant-variance lognormal SSMs are rarely used yet outperform the density-dependent models here. In Chapter 5, we apply our lognormal moment-matching method to embed the DALEC2 ecosystem model into the process component of a state space model using NEON data from the University of Notre Dame Environmental Research Center (UNDE).
Two fitting methods are considered for this difficult problem: the updated iterated filtering algorithm (IF2) and the particle marginal Metropolis-Hastings (PMMH) algorithm. We find that IF2 is the more efficient algorithm for our problem: our IF2 global search finds candidate parameter values in thirty hours, while PMMH takes 82 hours and accepts only 0.12% of proposed samples. The parameter values obtained from the IF2 global search show good potential for out-of-sample prediction of leaf area index and net ecosystem exchange, although both leave room for improvement in future work. Overall, this work informs the application of state space models to ecological forecasting problems where data are not available for all stocks and transfers at the ecosystem model's operational timestep, where large numbers of process parameters and long time series pose computational challenges, and where process uncertainty estimation is desired. / Doctor of Philosophy / With ecosystem carbon uptake expected to play a large role in climate change projections, it is important that we make our forecasts as informed as possible and account for as many sources of variation as we can. In this dissertation, we examine a statistical modeling framework called the state space model (SSM) and apply it to models of terrestrial ecosystem carbon. The SSM helps to capture numerous sources of variability that contribute to the overall predictability of a physical process. We discuss challenges of using this framework for ecosystem models and provide solutions to a number of problems that may arise when using SSMs. We develop methodology to ensure that these models respect the well-defined upper and lower bounds of the physical processes of interest. We use both real and synthetic data to verify that our methods perform as desired and provide key insights about their performance.
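The moment-matching mapping described in Chapter 4 is consistent with the standard lognormal identities; the sketch below implements that textbook mapping (the dissertation's exact parameterization may differ), with hypothetical numbers standing in for a process-model prediction.

```python
import numpy as np

def lognormal_moment_match(mean, var):
    """Return (mu, sigma2) of a lognormal with the given mean and variance.

    Standard identities: if X ~ LogNormal(mu, sigma2) then
      E[X]   = exp(mu + sigma2 / 2)
      Var[X] = (exp(sigma2) - 1) * exp(2*mu + sigma2)
    Inverting gives the mapping below.
    """
    sigma2 = np.log1p(var / mean**2)
    mu = np.log(mean) - 0.5 * sigma2
    return mu, sigma2

# Example: embed a positive process prediction (e.g., a carbon stock from a
# process model) and its process variance into a lognormal transition density.
pred_mean, pred_var = 150.0, 25.0  # hypothetical stock and variance
mu, sigma2 = lognormal_moment_match(pred_mean, pred_var)

draws = np.random.default_rng(2).lognormal(mu, np.sqrt(sigma2), size=100_000)
print(draws.mean(), draws.var())  # ~150 and ~25, and every draw stays positive
```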
26

Parallel magnetic resonance imaging reconstruction problems using wavelet representations / Problèmes de reconstruction en imagerie par résonance magnétique parallèle à l'aide de représentations en ondelettes

Chaari, Lotfi 05 November 2010 (has links)
To reduce scanning time or improve spatio-temporal resolution in some MRI applications, parallel MRI acquisition techniques using multiple receiver coils have emerged since the early 1990s as powerful methods. In these techniques, MRI images must be reconstructed from undersampled data acquired in k-space. To this end, several reconstruction techniques have been proposed, such as the widely used SENSitivity Encoding (SENSE) method. However, the reconstructed images generally present artifacts due to noise corrupting the observed data and to errors in estimating the coil sensitivity profiles. In this work, we present novel SENSE-based reconstruction methods that apply regularization in the complex wavelet domain so as to promote the sparsity of the solution. These methods achieve accurate image reconstruction under degraded experimental conditions in which neither the SENSE method nor standard regularized methods (e.g., Tikhonov) give convincing results. The proposed approaches rely on fast parallel optimization algorithms that handle convex but not necessarily differentiable criteria involving sparsity-promoting priors. Moreover, in contrast with most available reconstruction methods, which proceed slice by slice, one of the proposed methods allows 4D (3D + time) reconstruction that exploits spatial and temporal correlations. The hyperparameter estimation problem inherent to the regularization process has also been addressed from a Bayesian viewpoint using MCMC techniques. Experiments on real anatomical and functional data show that the proposed methods reduce reconstruction artifacts and improve statistical sensitivity/specificity in functional MRI.
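Not the authors' pipeline, but a sketch of its sparsity-promoting building block: soft-thresholding of wavelet coefficients (the proximal operator of an l1 penalty), here via PyWavelets on a toy image. A full regularized SENSE iteration would alternate a step like this with a data-fidelity step using coil sensitivities and k-space data, which are omitted here.

```python
import numpy as np
import pywt

def wavelet_soft_threshold(image, wavelet="db4", level=3, lam=0.05):
    """Proximity operator of lam * ||W x||_1: soft-threshold the wavelet
    coefficients of `image` and reconstruct. One building block of a
    regularized SENSE iteration; the data-fidelity step is omitted."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    shrunk = [approx] + [
        tuple(pywt.threshold(band, lam, mode="soft") for band in d)
        for d in details
    ]
    # Crop in case the reconstruction is padded by a pixel.
    return pywt.waverec2(shrunk, wavelet)[: image.shape[0], : image.shape[1]]

# Toy usage on a noisy synthetic image.
rng = np.random.default_rng(3)
clean = np.zeros((128, 128)); clean[40:90, 40:90] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = wavelet_soft_threshold(noisy)
print("residual RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))
```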
27

Estimation and Classification of Altimetric Signals

Severini, Jérôme 07 October 2010 (has links)
Measuring ocean surface height, surface winds (strongly linked to ocean temperatures), and wave height provides a set of parameters needed both to study the oceans and to monitor their evolution: satellite altimetry is one of the disciplines that makes this possible. An altimetric waveform results from emitting a high-frequency radar pulse toward a given surface (classically oceanic) and measuring the reflection of that pulse. There currently exist a non-optimal estimation method for altimetric waveforms and classification tools for identifying the different types of observed surfaces. In this study, we propose applying Bayesian estimation to altimetric waveforms, as well as new classification approaches. Finally, we propose a specific algorithm for studying topography in coastal areas, which is currently very little developed in altimetry. / After having scanned ocean levels for thirteen years, the French/American satellite Topex-Poséidon was retired in 2005. It was replaced by Jason-1 in December 2001, and a new satellite, Jason-2, was expected in 2008. Several estimation methods have been developed for signals from these satellites. In particular, estimators of sea height and wave height have shown very good performance when applied to waveforms backscattered from ocean surfaces. However, extracting relevant information from signals backscattered from non-oceanic surfaces such as inland waters, deserts, or ice is a more challenging problem. This PhD thesis is divided into two parts. The first develops classification methods for altimetric signals in order to recognize the type of surface reflecting the radar waveform; particular attention is devoted to support vector machines (SVMs) and functional data analysis. The second develops estimation algorithms appropriate to altimetric signals obtained after reflection on non-oceanic surfaces; Bayesian algorithms are investigated for this estimation problem. This PhD is co-supervised by the French company CLS (Collecte Localisation Satellites; see http://www.cls.fr/ for more details), which provides the real altimetric data necessary for this study.
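A hedged sketch of the classification part using scikit-learn: an SVM separating two synthetic waveform shapes standing in for ocean-like and ice-like echoes. The actual study uses measured waveforms and also considers functional data analysis; everything below is illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 128)

def synthetic_waveform(kind):
    """Crude stand-ins: ocean echoes have a sharp leading edge and a slow
    trailing decay (Brown-like shape); ice-like echoes are more peaked."""
    if kind == "ocean":
        wf = 1 / (1 + np.exp(-(t - 0.3) / 0.02)) * np.exp(-2.0 * np.clip(t - 0.3, 0, None))
    else:  # "ice"
        wf = np.exp(-((t - 0.3) ** 2) / (2 * 0.01**2))
    return wf + 0.05 * rng.standard_normal(t.size)

X = np.array([synthetic_waveform(k) for k in ["ocean", "ice"] for _ in range(200)])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```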
28

Impact of jumps on commodity price behavior

Manoel, Paulo Martins Barbosa Fortes 03 December 2012 (has links)
In this work we analyze the relevance of jumps in the pricing of commodity contingent claims by comparing two models. The first takes into account a mean-reverting convenience yield; the second generalizes the first by adding jumps in the spot price. Both models are estimated using a Bayesian approach, with posterior distributions simulated using MCMC techniques. Oil, wheat, and copper data are used for estimation. The econometric analysis indicates statistical significance for jumps, but we found no strong evidence that jumps improve derivative pricing.
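As an illustrative sketch of the two nested dynamics being compared (not the thesis's estimated model, which also carries a mean-reverting convenience yield), the code below simulates a spot price with and without Merton-style compound-Poisson jumps; all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_spot(n_steps=252, dt=1/252, s0=100.0, mu=0.05, sigma=0.3,
                  jump_rate=5.0, jump_mean=-0.02, jump_std=0.05, jumps=True):
    """Log-price diffusion with optional compound-Poisson jumps.
    The thesis's richer model also includes a mean-reverting convenience
    yield, omitted here for brevity."""
    log_s = np.log(s0) + np.zeros(n_steps + 1)
    for i in range(n_steps):
        diffusion = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        jump = 0.0
        if jumps:
            n_jumps = rng.poisson(jump_rate * dt)
            if n_jumps:
                jump = rng.normal(jump_mean, jump_std, size=n_jumps).sum()
        log_s[i + 1] = log_s[i] + diffusion + jump
    return np.exp(log_s)

path_jump = simulate_spot(jumps=True)
path_pure = simulate_spot(jumps=False)

# Jumps fatten the tails of returns relative to the pure diffusion.
rets = np.diff(np.log(path_jump))
print("excess kurtosis with jumps:",
      ((rets - rets.mean())**4).mean() / rets.var()**2 - 3)
```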
29

Asymmetric information in the Brazilian private health insurance market: testing the efficiency of coinsurance plans

Brunetti, Lucas 14 April 2010 (has links)
Asymmetric information in the health care system is a topic of interest to health insurers, policy makers, and scholars alike. This research analyzes how coinsurance contracts influence the moral hazard and adverse selection phenomena present in health plans, and their consequences for the demand for medical services. In this context, analyzing asymmetric information in the health care system is relevant because it can inform both public policy and the way insurers market their plans. Using the 2003 National Household Sample Survey (PNAD), this study tests whether coinsurance contracts work as efficient mechanisms for mitigating asymmetric information, that is, whether, net of the risks associated with the individual, the difference in contract alters the agents' behavior. A methodological procedure using the Monte Carlo method was proposed to test for asymmetric information. The results suggest that coinsurance contracts were efficient in individual plans, while in group plans their influence can be discarded. From a social welfare perspective, coinsurance is more efficient for individual contracts, whereas contracts without coinsurance are more efficient for group plans.
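A generic, hypothetical sketch of a Monte Carlo test in this spirit: simulate the null distribution of a statistic under no information asymmetry and compare the observed value. The statistic, data, and link below are invented for illustration and are far simpler than the structural model estimated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(6)

def mc_pvalue(observed_stat, simulate_null, n_sim=5000):
    """Generic Monte Carlo test: p-value of `observed_stat` against the
    distribution of the statistic under draws from the null model."""
    null_stats = np.array([simulate_null() for _ in range(n_sim)])
    return (np.sum(null_stats >= observed_stat) + 1) / (n_sim + 1)

# Illustrative statistic: correlation between plan choice (coinsurance vs.
# full coverage) and health-care utilization. Under the null of no
# asymmetric information the two are independent.
n = 2000
coverage = rng.integers(0, 2, size=n)            # hypothetical plan choices
utilization = rng.poisson(2.0 + 0.3 * coverage)  # hypothetical data with a link

obs = np.corrcoef(coverage, utilization)[0, 1]

def null_draw():
    u = rng.poisson(2.0, size=n)                 # utilization independent of plan
    return np.corrcoef(rng.integers(0, 2, size=n), u)[0, 1]

print("observed corr:", round(obs, 3), " MC p-value:", mc_pvalue(obs, null_draw))
```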
30

Bayesian approach for the nonlinear hypsometric regression models used in forest biometrics

Thiersch, Monica Fabiana Bento Moreira 25 February 2011 (has links)
In this work we propose a Bayesian approach to solve the inference problem with restrictions on the parameters of the Petterson, Prodan, Stofel, and Curtis models used to represent the hypsometric relationship in clones of Eucalyptus sp. We consider four different prior probability densities: the noninformative Jeffreys prior, a vague flat-normal prior, an empirically constructed prior, and a power prior. The Bayesian estimates were calculated using Markov chain Monte Carlo (MCMC) simulation. The proposed methods were applied to several real data sets, and the results were compared with those obtained from maximum likelihood estimators. The noninformative and vague priors gave results similar to the maximum likelihood estimates, but for several data sets the estimates lacked biological coherence. In contrast, the empirical informative prior and the power prior always produced biologically consistent results, regardless of the behavior of the data in the plot, highlighting the superiority of this approach.
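A hedged sketch of constrained Bayesian estimation for a hypsometric curve: a random-walk Metropolis sampler for a generic model h = 1.3 + a*exp(-b/d) (standing in for the Petterson, Prodan, Stofel, and Curtis forms, whose exact equations differ), with the biological restriction enforced by truncating the prior to positive parameter values; the data and tuning constants are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical height (m) and diameter (cm) data for a Eucalyptus plot.
d = rng.uniform(5, 30, size=60)
h = 1.3 + 30.0 * np.exp(-8.0 / d) + rng.normal(0, 1.0, size=d.size)

def log_post(theta):
    """Log-posterior for h ~ N(1.3 + a*exp(-b/d), s^2), with flat priors
    truncated to a, b > 0 to encode the biological restriction."""
    a, b, log_s = theta
    if a <= 0 or b <= 0:
        return -np.inf
    resid = h - (1.3 + a * np.exp(-b / d))
    s2 = np.exp(2 * log_s)
    return -0.5 * np.sum(resid**2) / s2 - d.size * log_s

# Random-walk Metropolis sampler.
theta = np.array([20.0, 5.0, 0.0])
samples, lp = [], log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [0.5, 0.3, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples[5000:])  # discard burn-in
print("posterior means (a, b, sigma):",
      post[:, 0].mean(), post[:, 1].mean(), np.exp(post[:, 2]).mean())
```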
