81

Validation des modèles statistiques tenant compte des variables dépendantes du temps en prévention primaire des maladies cérébrovasculaires / Validation of statistical models accounting for time-dependent variables in the primary prevention of cerebrovascular disease

Kis, Loredana 07 1900 (has links)
The main interest of this research is the validation of a statistical method in pharmacoepidemiology. Specifically, we compare the results of a previous study, performed with a nested case-control design that accounted for the average exposure to treatment, to: results obtained in a cohort design using the time-dependent exposure, with no adjustment for time since exposure; results obtained using the cumulative exposure weighted by the recent past; and results obtained with the Bayesian approach. Covariates are estimated by the classical approach as well as by a nonparametric Bayesian approach; in the latter, Bayesian model averaging is used to model the uncertainty in the choice of models. The technique used in the Bayesian approach was proposed in 1997 but, to our knowledge, has not previously been applied with a time-dependent variable. To model the cumulative effect of the time-varying exposure, in the classical approach the function assigning weights according to recency is estimated using regression splines. In order to allow comparison with the earlier study, a cohort of people diagnosed with hypertension is constructed from the RAMQ and Med-Echo databases. A Cox model including two time-varying variables is used: the dependent variable (first cerebrovascular event) and one of the independent variables, namely the exposure.
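The weighted-cumulative-exposure idea in this abstract lends itself to a small illustration. The sketch below uses hypothetical data and an assumed exponential decay standing in for the spline-estimated weight function; it only shows how a time-dependent WCE covariate could be assembled before being passed to a Cox model, and is not the thesis's implementation.

```python
import numpy as np

# Hypothetical illustration: build a weighted cumulative exposure (WCE)
# covariate from a subject's exposure history, weighting recent exposure
# more heavily.  The weight function is an arbitrary stand-in for the
# spline-estimated weights described in the abstract.
rng = np.random.default_rng(0)

n_days = 365
exposure = rng.integers(0, 2, size=n_days)      # daily exposure indicator (0/1)

max_lag = 90                                     # only the last 90 days contribute
lags = np.arange(1, max_lag + 1)
weights = np.exp(-lags / 30.0)                   # assumed decay; a spline basis
weights /= weights.sum()                         # would replace this in practice

def wce(exposure, weights):
    """WCE(t) = sum over the recent past of w(t - u) * X(u)."""
    out = np.zeros(len(exposure))
    for t in range(len(exposure)):
        past = exposure[max(0, t - len(weights)):t][::-1]   # most recent first
        out[t] = np.dot(weights[:len(past)], past)
    return out

wce_covariate = wce(exposure, weights)           # feed to a time-dependent Cox model
print(wce_covariate[-5:])
```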
82

Modelling Primary Energy Consumption under Model Uncertainty

Csereklyei, Zsuzsanna, Humer, Stefan 11 1900 (has links) (PDF)
This paper examines the long-term relationship between primary energy consumption and other key macroeconomic variables, including real GDP, labour force, capital stock and technology, using a panel dataset for 64 countries over the period 1965-2009. Deploying panel error correction models, we find that there is a positive relationship running from physical capital, GDP, and population to primary energy consumption. We observe, however, a negative relationship between total factor productivity and primary energy usage. Significant differences arise in the magnitude of the cointegration coefficients when we allow for differences in geopolitics and wealth levels. We also argue that inference on the basis of a single model, without taking model uncertainty into account, can lead to biased conclusions. Consequently, we address this problem by applying simple model averaging techniques to the estimated panel cointegration models. We find that tackling the uncertainty associated with selecting a single model via model averaging techniques leads to a more accurate representation of the link between energy consumption and the other macroeconomic variables, and to significantly improved out-of-sample forecast performance. (authors' abstract) / Series: Department of Economics Working Paper Series
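As a rough illustration of the forecast-combination step described above, the following hypothetical sketch combines the forecasts of three stand-in models with equal and inverse-MSE weights and compares out-of-sample accuracy; the data and "models" are simulated and are not the paper's panel cointegration models.

```python
import numpy as np

# Hypothetical sketch of "simple model averaging": instead of trusting a
# single estimated model, combine the forecasts of several candidates.
rng = np.random.default_rng(1)
T = 40
y = np.cumsum(rng.normal(size=T))                       # stand-in for log energy use
forecasts = y + rng.normal(scale=[[0.5], [1.0], [2.0]], size=(3, T))

train, test = slice(0, 20), slice(20, None)             # weights fit on first half
mse = ((forecasts[:, train] - y[train]) ** 2).mean(axis=1)
w_equal = np.full(3, 1 / 3)
w_mse = (1 / mse) / (1 / mse).sum()                     # inverse-MSE weights

def rmse(pred, target):
    return np.sqrt(((pred - target) ** 2).mean())

print("best single model   :", rmse(forecasts[mse.argmin(), test], y[test]))
print("equal-weight average:", rmse(w_equal @ forecasts[:, test], y[test]))
print("inverse-MSE average :", rmse(w_mse @ forecasts[:, test], y[test]))
```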
83

Bayesian Methods for Genetic Association Studies

Xu, Lizhen 08 January 2013 (has links)
We develop statistical methods for tackling two important problems in genetic association studies. First, we propose a Bayesian approach to overcome the winner's curse in genetic studies. Second, we consider a Bayesian latent variable model for analyzing longitudinal family data with pleiotropic phenotypes. The winner's curse in genetic association studies refers to the estimation bias of the reported odds ratios (OR) for an associated genetic variant from the initial discovery samples. It is a consequence of the sequential procedure in which the estimated effect of an associated genetic marker must first pass a stringent significance threshold. We propose a hierarchical Bayes method in which a spike-and-slab prior is used to account for the possibility that the significant test result may be due to chance. We examine the robustness of the method using different priors corresponding to different degrees of confidence in the testing results and propose a Bayesian model averaging procedure to combine estimates produced by different models. The Bayesian estimators yield smaller variance than the conditional likelihood estimator and outperform the latter in low-power studies. We investigate the performance of the method with simulations and applications to four real data examples. Pleiotropy occurs when a single genetic factor influences multiple quantitative or qualitative phenotypes, and it is present in many genetic studies of complex human traits. Longitudinal family studies combine the features of longitudinal studies in individuals and cross-sectional studies in families, and therefore provide more information about the genetic and environmental factors associated with the trait of interest. We propose a Bayesian latent variable modeling approach to model multiple phenotypes simultaneously in order to detect pleiotropic effects while allowing for longitudinal and/or family data. An efficient MCMC algorithm is developed to obtain the posterior samples, using hierarchical centering and parameter expansion techniques. We apply spike-and-slab prior methods to test whether the phenotypes are significantly associated with the latent disease status. We compute Bayes factors using path sampling and discuss their application in testing the significance of factor loadings and the indirect fixed effects. We examine the performance of our methods via extensive simulations and apply them to blood pressure data from a genetic study of type 1 diabetes (T1D) complications.
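The spike-and-slab shrinkage described above can be illustrated with a toy calculation. The sketch below uses made-up values for the reported log odds ratio, its standard error, and the prior settings, and it omits the conditioning on the significance threshold; it only shows how averaging over the "chance finding" and "real effect" models shrinks a naive estimate.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical spike-and-slab correction for the winner's curse: the reported
# log(OR) may be a chance finding (spike at zero) or a real effect (slab).
beta_hat, se = 0.40, 0.15          # reported log(OR) and its standard error
p_slab, tau = 0.5, 0.25            # prior prob. of a real effect, slab scale

# Marginal likelihood of beta_hat under each component of the prior.
m_spike = norm.pdf(beta_hat, loc=0.0, scale=se)
m_slab = norm.pdf(beta_hat, loc=0.0, scale=np.sqrt(se**2 + tau**2))

# Posterior probability of a real effect and the model-averaged posterior
# mean (conjugate normal shrinkage under the slab).
post_slab = p_slab * m_slab / (p_slab * m_slab + (1 - p_slab) * m_spike)
shrink = tau**2 / (tau**2 + se**2)
beta_bma = post_slab * shrink * beta_hat

print(f"P(real effect | data) = {post_slab:.2f}")
print(f"naive OR = {np.exp(beta_hat):.2f}, averaged OR = {np.exp(beta_bma):.2f}")
```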
85

Essays on forecasting and Bayesian model averaging

Eklund, Jana January 2006 (has links)
This thesis, which consists of four chapters, focuses on forecasting in a data-rich environment and related computational issues. Chapter 1, “An embarrassment of riches: Forecasting using large panels”, explores the idea of combining forecasts from various indicator models by using Bayesian model averaging (BMA) and compares the predictive performance of BMA with that of factor models. The combination of the two methods is also implemented, together with a benchmark, a simple autoregressive model. The forecast comparison is conducted in a pseudo out-of-sample framework for three distinct datasets measured at different frequencies. These include monthly and quarterly US datasets consisting of more than 140 predictors, and a quarterly Swedish dataset with 77 possible predictors. The results show that none of the considered methods is uniformly superior and that no method consistently outperforms or underperforms a simple autoregressive process. Chapter 2, “Forecast combination using predictive measures”, proposes using the out-of-sample predictive likelihood as the basis for BMA and forecast combination. In addition to its intuitive appeal, the use of the predictive likelihood relaxes the need to specify proper priors for the parameters of each model. We show that forecast weights based on the predictive likelihood have desirable asymptotic properties, and that these weights have better small-sample properties than those based on the traditional in-sample marginal likelihood when uninformative priors are used. In order to calculate the weights for the combined forecast, a number of observations, a hold-out sample, is needed, and there is a trade-off involved in its size: the number of observations available for estimation is reduced, which might have a detrimental effect, but as the hold-out sample size increases, the predictive measure becomes more stable, which should improve performance. When there is a true model in the model set, the predictive likelihood will select the true model asymptotically, but the convergence to the true model is slower than for the marginal likelihood. It is this slower convergence, coupled with protection against overfitting, which is the reason the predictive likelihood performs better when the true model is not in the model set. In Chapter 3, “Forecasting GDP with factor models and Bayesian forecast combination”, the predictive likelihood approach developed in the previous chapter is applied to forecasting GDP growth. The analysis is performed on quarterly economic datasets from six countries: Canada, Germany, Great Britain, Italy, Japan, and the United States. The forecast combination technique based on both in-sample and out-of-sample weights is compared to forecasts based on factor models. The traditional point forecast analysis is extended by considering confidence intervals. The results indicate that forecast combinations based on the predictive likelihood weights have better forecasting performance than the factor models and forecast combinations based on the traditional in-sample weights. In contrast to common findings, the predictive likelihood does improve upon an autoregressive process for longer horizons. The largest improvement over the in-sample weights is for small hold-out sample sizes, which provides protection against structural breaks at the end of the sample period. 
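As a toy illustration of the predictive-likelihood weighting proposed in Chapter 2, the following hypothetical sketch computes combination weights from the likelihood of hold-out observations under each model's predictive densities; the predictive means, standard deviations, and next-period forecasts are invented stand-ins, not the thesis's models.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical forecast combination with predictive-likelihood weights: each
# model's weight is proportional to the likelihood of the hold-out
# observations under its one-step-ahead predictive densities.
rng = np.random.default_rng(2)
T_hold = 24
y_hold = rng.normal(size=T_hold)

# Stand-ins for each model's predictive means/sds over the hold-out sample.
pred_mean = np.stack([y_hold + rng.normal(scale=s, size=T_hold) for s in (0.3, 0.6, 1.2)])
pred_sd = np.array([0.4, 0.7, 1.3])[:, None]

log_pred_lik = norm.logpdf(y_hold, loc=pred_mean, scale=pred_sd).sum(axis=1)
w = np.exp(log_pred_lik - log_pred_lik.max())
w /= w.sum()                                     # predictive-likelihood weights
print("weights:", np.round(w, 3))

# Combined point forecast for the next period (assumed per-model forecasts).
next_forecasts = np.array([0.10, 0.05, -0.20])
print("combined forecast:", w @ next_forecasts)
```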
The potential benefits of model averaging as a tool for extracting the relevant information from a large set of predictor variables come at the cost of considerable computational complexity. To avoid evaluating all the models, several approaches have been developed to simulate from the posterior distributions. Markov chain Monte Carlo methods can be used to draw directly from the model posterior distributions. It is desirable that the chain moves well through the model space and takes draws from regions with high probability. Several computationally efficient sampling schemes, either one at a time or in blocks, have been proposed for speeding up convergence. There is a trade-off between local moves, which make use of the current parameter values to propose plausible values for model parameters, and more global transitions, which potentially allow faster exploration of the distribution of interest but may be much harder to implement efficiently. Local model moves enable the use of fast updating schemes, where it is unnecessary to completely re-estimate the new, slightly modified model to obtain an updated solution. The fourth and final chapter, “Computational efficiency in Bayesian model and variable selection”, investigates the possibility of increasing computational efficiency by using alternative algorithms to obtain estimates of model parameters, as well as keeping track of their numerical accuracy. Various samplers that explore the model space are also presented and compared based on the output of the Markov chain. / Diss. Stockholm : Handelshögskolan, 2006
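A minimal sketch of the kind of model-space sampler discussed above (local add/drop moves accepted by a Metropolis rule) is given below; exp(-BIC/2) stands in for the posterior model probabilities, and the simulated data and that approximation are illustrative assumptions, not the thesis's algorithms.

```python
import numpy as np

# Hypothetical MC3-style sampler: propose flipping one regressor in or out,
# accept with the ratio of approximate posterior model probabilities.
rng = np.random.default_rng(3)
n, p = 200, 8
X = rng.normal(size=(n, p))
y = X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=n)

def bic(include):
    cols = np.flatnonzero(include)
    Z = np.column_stack([np.ones(n)] + ([X[:, cols]] if cols.size else []))
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return n * np.log(resid @ resid / n) + Z.shape[1] * np.log(n)

current = np.zeros(p, dtype=bool)
visits = np.zeros(p)
for it in range(5000):
    proposal = current.copy()
    j = rng.integers(p)
    proposal[j] = ~proposal[j]                   # local add/drop move
    if np.log(rng.uniform()) < (bic(current) - bic(proposal)) / 2:
        current = proposal
    visits += current                            # tally variable inclusion
print("posterior inclusion frequencies:", np.round(visits / 5000, 2))
```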
86

Ponderação bayesiana de modelos utilizando diferentes séries de precipitação aplicada à simulação chuva-vazão na Bacia do Ribeirão da Onça / Bayesian model averaging using different precipitation series applied to rainfall-runoff simulation in the Ribeirão da Onça Basin

Antônio Alves Meira Neto 11 July 2013 (has links)
This study proposed an approach to the hydrological rainfall-runoff modelling of the Ribeirão da Onça Basin (B.R.O.) based on automatic calibration and uncertainty analysis methods, together with model averaging. The Soil and Water Assessment Tool (SWAT) was used because of its distributed nature and physical description of hydrologic processes. An ensemble composed of five different precipitation series, based on different sources and spatial interpolation schemes, was used as input to the SWAT model. The semi-automatic Sequential Uncertainty Fitting ver. 2 (SUFI-2) procedure was used for calibration and uncertainty analysis of the SWAT model parameters, together with the generation of streamflow simulations with uncertainty intervals for each precipitation series. Bayesian Model Averaging (BMA) was then used to merge the different responses into a single probabilistic forecast. The results of the uncertainty analysis for the SWAT parameters show that the Soil Conservation Service (SCS) model for surface runoff prediction may not be suitable for the B.R.O., and that further investigation of the basin's soil physical properties is recommended. An analysis of the accuracy and precision of the simulations produced by the precipitation ensemble members against the BMA simulation supports the latter as the most suitable for rainfall-runoff simulation at the B.R.O.
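The BMA post-processing step can be sketched along the lines of ensemble BMA with Gaussian components and EM-estimated weights; the simulated "observations" and ensemble members below are stand-ins, and the actual thesis pipeline (SWAT plus SUFI-2) is not reproduced.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical BMA for an ensemble of simulations: observed flows are modelled
# as a mixture of Gaussians centred on each member; EM estimates the weights.
rng = np.random.default_rng(4)
T, K = 300, 5
truth = 10 + 3 * np.sin(np.linspace(0, 12, T))
members = truth + rng.normal(scale=np.linspace(0.5, 2.5, K)[:, None], size=(K, T))
obs = truth + rng.normal(scale=0.3, size=T)

w = np.full(K, 1 / K)
sigma = 1.0
for _ in range(100):                              # EM iterations
    dens = w[:, None] * norm.pdf(obs, loc=members, scale=sigma)
    z = dens / dens.sum(axis=0)                   # responsibilities
    w = z.mean(axis=1)
    sigma = np.sqrt((z * (obs - members) ** 2).sum() / T)

print("BMA weights:", np.round(w, 3))
bma_mean = w @ members                            # combined simulation
print("combined simulation, first 3 steps:", np.round(bma_mean[:3], 2))
```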
87

Risk factor modeling of Hedge Funds' strategies

Radosavčević, Aleksa January 2017 (has links)
This thesis aims to identify the main market risk factors driving different strategies implemented by hedge funds by looking at correlation coefficients, implementing Principal Component Analysis, and analyzing the "loadings" of the first three principal components, which explain the largest portion of the variation in hedge funds' returns. In the next step, a stepwise regression iteratively includes and excludes market risk factors for each strategy, searching for the combination of risk factors that offers the model with the best fit, based on the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Lastly, to avoid spurious results and overcome model uncertainty issues, a Bayesian Model Averaging (BMA) approach was taken. Key words: Hedge Funds, hedge funds' strategies, market risk, principal component analysis, stepwise regression, Akaike Information Criterion, Bayesian Information Criterion, Bayesian Model Averaging Author's e-mail: aleksaradosavcevic@gmail.com Supervisor's e-mail: mp.princ@seznam.cz
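A compact, hypothetical sketch of the stepwise-selection step is given below: factors are added one at a time whenever doing so lowers the AIC, and the search stops when no addition helps. The simulated factors and returns, and the plain-OLS AIC formula, are assumptions for illustration only.

```python
import numpy as np

# Hypothetical forward stepwise selection of market risk factors by AIC.
rng = np.random.default_rng(5)
n, factors = 250, [f"factor_{i}" for i in range(6)]
X = rng.normal(size=(n, len(factors)))
y = 0.8 * X[:, 0] - 0.4 * X[:, 3] + rng.normal(scale=0.5, size=n)

def aic(cols):
    Z = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return n * np.log(resid @ resid / n) + 2 * Z.shape[1]

selected, best = [], aic([])
while True:
    candidates = [(aic(selected + [c]), c) for c in range(len(factors)) if c not in selected]
    if not candidates:
        break
    score, c = min(candidates)
    if score >= best:                             # no factor improves the fit
        break
    selected, best = selected + [c], score
print("selected factors:", [factors[c] for c in selected], "AIC:", round(best, 1))
```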
88

Atualização dinâmica de modelo de regressão logística binária para detecção de fraudes em transações eletrônicas com cartão de crédito / Dynamic update of binary logistic regression model for fraud detection in electronic credit card transactions

Fidel Beraldi 01 December 2014 (has links)
With technological and economic development, which has facilitated communication and increased purchasing power, credit card transactions have become the primary payment method in national and international retail (Bolton and Hand, 2002). In this scenario, as the number of credit card transactions grows, more opportunities are created for fraudsters to produce new forms of fraud, resulting in large losses for the financial system (Chan et al., 1999). Fraud indexes have shown that e-commerce transactions are riskier than card-present transactions, since the former do not use secure and efficient processes to authenticate the cardholder, such as a personal identification number (PIN). Because fraudsters adapt quickly to fraud prevention measures, statistical models for fraud detection need to be adaptable and flexible enough to evolve over time in a dynamic way. Raftery et al. (2010) developed a method called Dynamic Model Averaging (DMA), which implements a process of continuous updating over time. 
In this thesis, we develop DMA models for electronic transactions from the e-commerce environment that incorporate the trends and characteristics of fraud in each analysis period. We also develop classical logistic regression models in order to compare their performance in the fraud detection process. The database used for the experiment was provided by an electronic payment service company. The experiment shows that the DMA models present better results than the classical logistic regression models with respect to the F measure and the area under the ROC curve (AUC). The F measure for the DMA model was 58%, while the classical logistic regression model reached 29%; for the AUC, the DMA model reached 93% and the classical model 84%. Considering these results, we can conclude that the DMA models' ability to update over time makes a large difference for data such as fraud data, whose behavior changes continuously. Thus, their application proves suitable for detecting fraudulent transactions in the e-commerce environment.
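The DMA updating scheme of Raftery et al. (2010) can be sketched as a simple recursion on model probabilities: a forgetting step followed by a Bayesian update with each model's predictive likelihood. In the sketch below the per-model predictive densities and the data stream are fixed stand-ins; a full DMA would also update each model's parameters recursively (e.g., by dynamic logistic regression).

```python
import numpy as np
from scipy.stats import norm

# Hypothetical Dynamic Model Averaging recursion on model probabilities.
rng = np.random.default_rng(6)
T, K = 200, 3
alpha = 0.99                                     # forgetting factor
pi = np.full(K, 1 / K)                           # P(model k | data so far)
history = np.zeros((T, K))                       # track probabilities over time

for t in range(T):
    y_t = rng.normal()                           # new observation (stand-in)
    pred_mean = np.array([0.0, 0.3, -0.3])       # each model's prediction (stand-in)
    pred_lik = norm.pdf(y_t, loc=pred_mean, scale=1.0)

    pi = pi ** alpha                             # prediction (forgetting) step
    pi /= pi.sum()
    pi = pi * pred_lik                           # update with the new evidence
    pi /= pi.sum()
    history[t] = pi

print("final model probabilities:", np.round(pi, 3))
```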
89

Novel pharmacometric methods to improve clinical drug development in progressive diseases / Place de nouvelles approches pharmacométriques pour optimiser le développement clinique des médicaments dans le secteur des maladies progressives

Buatois, Simon 26 November 2018 (has links)
In the mid-1990s, model-based approaches were mainly used as supporting tools for drug development. Restricted to a "rescue mode" in situations of drug development failure, their impact was relatively limited. Nowadays, the merits of these approaches are widely recognised by stakeholders in healthcare, and they play a crucial role in drug development for progressive diseases. Despite their numerous advantages, model-based approaches present important drawbacks limiting their use in confirmatory trials. Traditional pharmacometric (PMX) analyses rely on model selection and consequently ignore model structure uncertainty when generating statistical inference. Ignoring the model selection step can lead to over-optimistic confidence intervals and to inflation of the type I error. Two projects of this thesis investigated the value of innovative PMX approaches to address part of these shortcomings in a hypothetical dose-finding study for a progressive disorder. The model averaging approach coupled with a combined likelihood ratio test showed promising results and represents an additional step towards the use of PMX for the primary analysis in dose-finding studies. In the learning phase, PMX is a key discipline with applications at every stage of drug development, used to gain insight into drug, mechanism and disease characteristics with the ultimate goal of aiding efficient drug development. In this thesis, the merits of PMX analysis were evaluated in the context of Parkinson's disease. An item-response-theory longitudinal model was successfully developed to precisely describe the disease progression of Parkinson's disease patients while acknowledging the composite nature of a patient-reported outcome. 
To conclude, this thesis enhances the use of PMX to aid efficient drug development and/or regulatory decisions.
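As a loose illustration of model averaging in a dose-finding context (not the thesis's model-averaging/combined-likelihood-ratio-test procedure), the sketch below fits a few candidate dose-response models to simulated data and averages their predicted curves with AIC weights; doses, data, and candidate models are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-response model averaging with AIC weights.
rng = np.random.default_rng(7)
dose = np.repeat([0, 1, 2, 4, 8], 20)
resp = 1 + 3 * dose / (2 + dose) + rng.normal(scale=1.0, size=dose.size)

models = {
    "emax":   (lambda d, e0, emax, ed50: e0 + emax * d / (ed50 + d), [1, 3, 2]),
    "linear": (lambda d, e0, slope: e0 + slope * d, [1, 0.5]),
    "loglin": (lambda d, e0, slope: e0 + slope * np.log1p(d), [1, 1]),
}

grid = np.linspace(0, 8, 50)
aics, preds = {}, {}
for name, (f, p0) in models.items():
    p, _ = curve_fit(f, dose, resp, p0=p0, maxfev=10000)
    rss = ((resp - f(dose, *p)) ** 2).sum()
    aics[name] = dose.size * np.log(rss / dose.size) + 2 * (len(p) + 1)
    preds[name] = f(grid, *p)

a = np.array(list(aics.values()))
w = np.exp(-(a - a.min()) / 2)
w /= w.sum()                                     # AIC weights
averaged = sum(wk * preds[name] for wk, name in zip(w, aics))
print("model weights:", dict(zip(aics, np.round(w, 3))))
print("averaged response at the highest dose:", round(averaged[-1], 2))
```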
90

Mají devizové rezervy centrálních bank dopad na inflaci? / Do Central Bank FX Reserves Matter for Inflation?

Keblúšek, Martin January 2020 (has links)
Foreign exchange reserves are a useful tool and a buffer, but maintaining an amount that is too large can be costly to the economy. The recent accumulation of these reserves points to the importance of this topic. This thesis focuses on one specific part of the effect of FX reserves on the economy: inflation. I use panel data for 74 countries from 1996 to 2017. There is a certain degree of model uncertainty, which this thesis accounts for by using the Bayesian model averaging (BMA) estimation technique. The findings from my model averaging estimations show FX reserves not to be important for inflation determination, with close to no change when altering lags or variables, when limiting the sample to fixed FX regimes, or when limiting the sample to inflation-targeting regimes. The most important variables are estimated to be a central bank financial strength proxy, exchange rate depreciation, money supply, inflation targeting, and capital account openness. These results are robust to lag changes and prior changes, and for the most part remain the same when pooled OLS is used.
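The BMA exercise described above can be miniaturised as follows: enumerate regressor subsets, weight each model by its BIC-approximated posterior probability, and read off posterior inclusion probabilities (PIPs). The variables and data below are simulated stand-ins, not the thesis's 74-country panel.

```python
import numpy as np
from itertools import combinations

# Hypothetical BMA over regressions with BIC-based model weights and PIPs.
rng = np.random.default_rng(8)
n = 300
names = ["fx_reserves", "depreciation", "money_supply", "it_regime", "openness"]
X = rng.normal(size=(n, len(names)))
y = 0.6 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(size=n)   # reserves have no effect

def bic(cols):
    Z = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return n * np.log(resid @ resid / n) + Z.shape[1] * np.log(n)

subsets = [list(c) for r in range(len(names) + 1)
           for c in combinations(range(len(names)), r)]
bics = np.array([bic(s) for s in subsets])
w = np.exp(-(bics - bics.min()) / 2)
w /= w.sum()                                             # posterior model probabilities

pip = {name: sum(wk for wk, s in zip(w, subsets) if j in s)
       for j, name in enumerate(names)}
print({k: round(v, 2) for k, v in pip.items()})
```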
