  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

ESSAYS IN NONSTATIONARY TIME SERIES ECONOMETRICS

Xuewen Yu (13124853) 26 July 2022 (has links)
<p>This dissertation is a collection of four essays on nonstationary time series econometrics, which are grouped into four chapters. The first chapter investigates the inference in mildly explosive autoregressions under unconditional heteroskedasticity. The second chapter develops a new approach to forecasting a highly persistent time series that employs feasible generalized least squares (FGLS) estimation of the deterministic components in conjunction with Mallows model averaging. The third chapter proposes new bootstrap procedures for detecting multiple persistence shifts in a time series driven by nonstationary volatility. The last chapter studies the problem of testing partial parameter stability in cointegrated regression models.</p>
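The setting of the first chapter — a mildly explosive autoregression with unconditional heteroskedasticity — can be illustrated with a short simulation. This is a hedged sketch only: the parameter values, the single variance break, and the simple OLS estimate below are illustrative choices, not the dissertation's actual design.

```python
import numpy as np

def simulate_mildly_explosive(n=200, c=1.0, alpha=0.8, break_frac=0.5,
                              sigma1=1.0, sigma2=2.0, seed=0):
    """Simulate y_t = rho_n * y_{t-1} + e_t with rho_n = 1 + c / n**alpha
    (a mildly explosive root) and a one-time variance shift in e_t
    (unconditional heteroskedasticity)."""
    rng = np.random.default_rng(seed)
    rho = 1.0 + c / n**alpha
    sigma = np.where(np.arange(n) < int(break_frac * n), sigma1, sigma2)
    e = rng.standard_normal(n) * sigma
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + e[t]
    return y, rho

y, rho = simulate_mildly_explosive()
rho_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])  # OLS estimate of the root
```

With an explosive root the OLS estimate concentrates quickly around `rho`, which is what makes inference under heteroskedasticity the interesting question.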
72

Three Essays in Inference and Computational Problems in Econometrics

Todorov, Zvezdomir January 2020 (has links)
This dissertation is organized into three independent chapters. In Chapter 1, I consider the selection of weights for averaging a set of threshold models. The existing model averaging literature focuses primarily on averaging linear models; I consider threshold regression models instead. The theory developed in that chapter demonstrates that the proposed jackknife model averaging estimator achieves asymptotic optimality when the candidate models are all misspecified threshold models. A simulation study demonstrates that the jackknife model averaging estimator achieves the lowest mean squared error when contrasted against other model selection and model averaging methods. In Chapter 2, I propose a model averaging framework for the synthetic control method of Abadie and Gardeazabal (2003) and Abadie et al. (2010). The proposed estimator serves a twofold purpose. First, it reduces the bias in estimating the weights each member of the donor pool receives. Second, it accounts for model uncertainty in the program evaluation estimation. I study two variations of the model: one where model weights are derived by solving a cross-validation quadratic program, and another where each candidate model receives equal weight. Next, I show how to apply the placebo study and the conformal inference procedure to both versions of my estimator. With a simulation study, I demonstrate the superior performance of the proposed procedure. In Chapter 3, which is co-authored with my advisor Professor Youngki Shin, we provide an exact computation algorithm for the maximum rank correlation estimator using the mixed integer programming (MIP) approach. We construct a new constrained optimization problem by transforming all indicator functions into binary parameters to be estimated and show that the transformation is equivalent to the original problem. Using a modern MIP solver, we apply the proposed method to an empirical example and Monte Carlo simulations. The results show that the proposed algorithm performs better than the existing alternatives. / Dissertation / Doctor of Philosophy (PhD)
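The maximum rank correlation objective that Chapter 3 attacks with MIP can be written down compactly. As an illustration only, the sketch below maximizes the same pairwise-concordance objective by naive grid search over a unit-norm coefficient in two dimensions; the dissertation's exact approach uses an MIP solver, and the data and grid here are invented.

```python
import numpy as np

def mrc_objective(beta, X, y):
    """Han-type maximum rank correlation objective: the number of pairs
    (i, j) with y_i > y_j and x_i'beta > x_j'beta (concordant pairs)."""
    idx = X @ beta
    return int(np.sum((y[:, None] > y[None, :]) & (idx[:, None] > idx[None, :])))

def mrc_grid(X, y, n_grid=360):
    """Naive grid search over unit-norm beta in R^2 via its angle --
    a brute-force stand-in for the exact MIP formulation."""
    best_val, best_beta = -1, None
    for theta in np.linspace(0.0, np.pi, n_grid, endpoint=False):
        beta = np.array([np.cos(theta), np.sin(theta)])
        val = mrc_objective(beta, X, y)
        if val > best_val:
            best_val, best_beta = val, beta
    return best_beta

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
true_beta = np.array([0.6, 0.8])               # unit norm by construction
y = (X @ true_beta + 0.1 * rng.standard_normal(200) > 0).astype(float)
beta_hat = mrc_grid(X, y)
```

The estimator is identified only up to scale, which is why the search is restricted to the unit circle; the MIP reformulation replaces the indicator functions in `mrc_objective` with binary decision variables.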
73

A Comprehensive Approach to Posterior Jointness Analysis in Bayesian Model Averaging Applications

Crespo Cuaresma, Jesus, Grün, Bettina, Hofmarcher, Paul, Humer, Stefan, Moser, Mathias 03 1900 (has links) (PDF)
Posterior analysis in Bayesian model averaging (BMA) applications often includes the assessment of measures of jointness (joint inclusion) across covariates. We link the discussion of jointness measures in the econometric literature to the literature on association rules in data mining exercises. We analyze a group of alternative jointness measures that include those proposed in the BMA literature and several others put forward in the field of data mining. The way these measures address the joint exclusion of covariates appears particularly important in terms of the conclusions that can be drawn from them. Using a dataset of economic growth determinants, we assess how the measurement of jointness in BMA can affect inference about the structure of bivariate inclusion patterns across covariates. (authors' abstract) / Series: Department of Economics Working Paper Series
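As a rough illustration of how such measures are computed, the sketch below takes a draws-by-covariates 0/1 inclusion matrix (as produced by a model-sampling run) and returns two bivariate measures: the association-rule "lift" from data mining and a Jaccard-style overlap ratio related to the jointness statistics discussed in this literature. The forms are assumed for illustration; the paper analyzes a broader set of measures.

```python
import numpy as np

def jointness_measures(inc, i, j):
    """Bivariate jointness from a (draws x covariates) 0/1 inclusion matrix.
    lift    = P(i, j) / (P(i) * P(j))             (association-rule 'lift')
    overlap = P(i, j) / (P(i) + P(j) - P(i, j))   (Jaccard-style ratio)"""
    pi, pj = inc[:, i].mean(), inc[:, j].mean()
    pij = (inc[:, i] & inc[:, j]).mean()
    return pij / (pi * pj), pij / (pi + pj - pij)

# Toy posterior: 4 sampled models over 2 covariates
inc = np.array([[1, 1],
                [1, 1],
                [1, 0],
                [0, 0]])
lift, overlap = jointness_measures(inc, 0, 1)
```

A lift above 1 indicates the covariates are included together more often than independent inclusion would imply; the overlap ratio additionally penalizes draws where only one of the two appears.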
74

Validation des modèles statistiques tenant compte des variables dépendantes du temps en prévention primaire des maladies cérébrovasculaires / Validation of statistical models accounting for time-dependent variables in the primary prevention of cerebrovascular diseases

Kis, Loredana 07 1900 (has links)
L’intérêt principal de cette recherche porte sur la validation d’une méthode statistique en pharmaco-épidémiologie. Plus précisément, nous allons comparer les résultats d’une étude précédente réalisée avec un devis cas-témoins niché dans la cohorte utilisé pour tenir compte de l’exposition moyenne au traitement : – aux résultats obtenus dans un devis cohorte, en utilisant la variable exposition variant dans le temps, sans faire d’ajustement pour le temps passé depuis l’exposition ; – aux résultats obtenus en utilisant l’exposition cumulative pondérée par le passé récent ; – aux résultats obtenus selon la méthode bayésienne. Les covariables seront estimées par l’approche classique ainsi qu’en utilisant l’approche non paramétrique bayésienne. Pour la deuxième le moyennage bayésien des modèles sera utilisé pour modéliser l’incertitude face au choix des modèles. La technique utilisée dans l’approche bayésienne a été proposée en 1997 mais selon notre connaissance elle n’a pas été utilisée avec une variable dépendante du temps. Afin de modéliser l’effet cumulatif de l’exposition variant dans le temps, dans l’approche classique la fonction assignant les poids selon le passé récent sera estimée en utilisant des splines de régression. Afin de pouvoir comparer les résultats avec une étude précédemment réalisée, une cohorte de personnes ayant un diagnostique d’hypertension sera construite en utilisant les bases des données de la RAMQ et de Med-Echo. Le modèle de Cox incluant deux variables qui varient dans le temps sera utilisé. Les variables qui varient dans le temps considérées dans ce mémoire sont la variable dépendante (premier évènement cérébrovasculaire) et une des variables indépendantes, notamment l’exposition / The main interest of this research is the validation of a statistical method in pharmacoepidemiology.
Specifically, we will compare the results of a previous study performed with a nested case-control design, which took into account the average exposure to treatment, to: (i) results obtained in a cohort study, using the time-dependent exposure, with no adjustment for time since exposure; (ii) results obtained using the cumulative exposure weighted by the recent past; and (iii) results obtained using Bayesian model averaging. Covariates are estimated by the classical approach and by a nonparametric Bayesian approach. In the latter, Bayesian model averaging is used to model the uncertainty in the choice of models. To model the cumulative effect of exposure varying over time, in the classical approach the function assigning weights according to recency is estimated using regression splines. In order to compare the results with previous studies, a cohort of people diagnosed with hypertension is constructed using the databases of the RAMQ and Med-Echo. A Cox model including two time-varying variables is used. The time-dependent variables considered in this thesis are the dependent variable (first stroke event) and one of the independent variables, namely the exposure.
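The weighted cumulative exposure idea described above can be sketched as follows. This is illustrative only: here the recency weights are fixed, whereas the study estimates the weight function with regression splines inside a Cox model.

```python
import numpy as np

def weighted_cumulative_exposure(x, weights):
    """WCE(t) = sum_{k=0}^{m-1} weights[k] * x[t-k]: cumulative exposure
    weighted by recency, where weights[k] applies to exposure k periods
    in the past."""
    n, m = len(x), len(weights)
    wce = np.zeros(n)
    for t in range(n):
        for k in range(min(m, t + 1)):
            wce[t] += weights[k] * x[t - k]
    return wce

# Exposure on the first two days, with weights decaying over three days
x = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
w = np.array([0.5, 0.3, 0.2])
wce = weighted_cumulative_exposure(x, w)
```

The resulting series rises while exposure is current and decays once it stops, which is exactly the behaviour the recency-weighting is meant to capture.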
75

Bayesian and Frequentist Approaches for the Analysis of Multiple Endpoints Data Resulting from Exposure to Multiple Health Stressors.

Nyirabahizi, Epiphanie 08 March 2010 (has links)
In risk analysis, benchmark dose (BMD) methodology is used to quantify the risk associated with exposure to stressors such as environmental chemicals. It consists of fitting a mathematical model to the exposure data, and the BMD is the dose expected to result in a pre-specified response or benchmark response (BMR). Most available exposure data are from single-chemical exposures, but living organisms are exposed to multiple sources of hazards. Furthermore, in some studies, researchers may observe multiple endpoints on one subject. Statistical approaches to the multiple endpoints problem can be partitioned into a dimension-reduction group and a dimension-preserving group. A composite score based on a desirability function is used, as a dimension-reduction method, to evaluate the neurotoxicity effects of a mixture of five organophosphate pesticides (OP) at a fixed mixing-ratio ray, with five endpoints observed. Then, a Bayesian hierarchical model approach is introduced, as a single unifying dimension-preserving method, to evaluate the risk associated with exposure to chemical mixtures. At a pre-specified vector of BMRs of interest, the method estimates a tolerable region, referred to as the benchmark dose tolerable area (BMDTA), in multidimensional Euclidean space. The endpoints defining the BMDTA are determined, and model uncertainty and model selection problems are addressed by using the Bayesian model averaging (BMA) method.
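For a single endpoint, the BMD computation reduces to inverting a fitted dose-response model at the chosen BMR. Below is a minimal sketch for the quantal-linear model, one standard dose-response form; the slope value is illustrative, and the thesis works with multivariate extensions of this idea.

```python
import math

def bmd_quantal_linear(beta, bmr=0.10):
    """Benchmark dose for the quantal-linear model
    P(d) = gamma + (1 - gamma) * (1 - exp(-beta * d)),
    with the BMR defined as extra risk:
    (P(BMD) - gamma) / (1 - gamma) = BMR  =>  BMD = -ln(1 - BMR) / beta."""
    return -math.log(1.0 - bmr) / beta

bmd = bmd_quantal_linear(beta=0.05, bmr=0.10)  # in the dose units of the study
```

Note that the background rate gamma cancels out of the extra-risk definition, so only the fitted slope enters the BMD for this model family.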
76

Důchodová elasticita poptávky po vodě: Meta-analýza / Income Elasticity of Water Demand: A Meta-Analysis

Vlach, Tomáš January 2016 (has links)
If policymakers address water scarcity with the demand-oriented approach, the income elasticity of water demand is of pivotal importance. Its estimates, however, differ considerably. We collect 307 estimates of the income elasticity of water demand reported in 62 studies, codify 31 variables describing the estimation design, and employ Bayesian model averaging to address the model uncertainty inherent in any meta-analysis. The studies were published between 1972 and 2015, which means that this meta-analysis covers a longer period of time than the two previous meta-analyses on this topic combined. Our results suggest that income elasticity estimates for developed countries do not significantly differ from those for developing countries, and that different estimation techniques do not systematically produce different values of the income elasticity of water demand. Using both graphical and regression analysis, we find evidence of publication selection bias in the literature on the income elasticity of water demand. We correct the estimates for publication selection bias and estimate the true effect beyond bias, which reaches approximately 0.2.
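The publication-bias correction described above is commonly implemented as a FAT-PET meta-regression of reported estimates on their standard errors. The sketch below runs it on simulated data; the data-generating values are invented and not taken from the thesis.

```python
import numpy as np

def fat_pet(estimates, std_errors):
    """FAT-PET meta-regression: estimate_i = b0 + b1 * se_i + u_i.
    b1 is the funnel-asymmetry (publication bias) term; b0 is the
    'effect beyond bias'."""
    X = np.column_stack([np.ones_like(std_errors), std_errors])
    coef, *_ = np.linalg.lstsq(X, estimates, rcond=None)
    return coef

rng = np.random.default_rng(2)
se = rng.uniform(0.05, 0.5, 300)
est = 0.2 + 0.8 * se + rng.standard_normal(300) * se  # true effect 0.2, bias 0.8
coef = fat_pet(est, se)
```

Under selective reporting, imprecise studies need larger estimates to look significant, which induces the positive slope on the standard error; the intercept recovers the effect a perfectly precise study would report.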
77

Ohodnocování a predikce systémového rizika: Systém včasného varovaní navržený pro Českou republiku / Systemic Risks Assessment and Systemic Events Prediction: Early Warning System Design for the Czech Republic

Žigraiová, Diana January 2013 (has links)
This thesis develops an early warning system framework for assessing systemic risks and predicting systemic events, i.e. periods of extreme financial instability with potential real costs, over a short horizon of six quarters and a long horizon of twelve quarters, on a panel of 14 countries, both advanced and developing. First, a financial stress index is built by aggregating indicators from equity, foreign exchange, security, and money markets in order to identify the starting dates of systemic financial crises for each country in the panel. Second, the selection of early warning indicators for the assessment and prediction of systemic risks is undertaken in a two-step approach: relevant prediction horizons for each indicator are found by means of a univariate logit model, followed by the application of the Bayesian model averaging method to identify the most useful indicators. Next, logit models containing only the useful indicators are estimated on the panel, while their in-sample and out-of-sample performance is assessed by a variety of measures. Finally, having applied the constructed EWS for both horizons to the Czech Republic, it was found that even though the models for both horizons perform very well in-sample, i.e. both predict 100% of crises, only the long model attains the maximum utility of 0.5 as...
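The utility figure quoted above can be illustrated with the standard policy-loss usefulness measure for early warning systems, in the spirit of Alessi and Detken (2011). This is a sketch under that assumed formulation; the thesis's exact evaluation measures may differ.

```python
def ews_usefulness(tp, fp, fn, tn, theta=0.5):
    """Policy-loss usefulness of an early warning system:
    loss = theta * (missed-crisis rate) + (1 - theta) * (false-alarm rate),
    usefulness = min(theta, 1 - theta) - loss.
    For theta = 0.5 the attainable maximum is 0.5, the figure quoted above."""
    t2 = fn / (fn + tp) if (fn + tp) else 0.0   # share of crises missed
    t1 = fp / (fp + tn) if (fp + tn) else 0.0   # share of false alarms
    return min(theta, 1 - theta) - (theta * t2 + (1 - theta) * t1)

u_perfect = ews_usefulness(tp=10, fp=0, fn=0, tn=90)  # a perfect model
```

A model that misses every crisis (or cries wolf constantly) scores zero or below, so the measure directly penalizes both error types according to the policymaker's preference theta.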
78

Obchodovaný objem a očekávané výnosy akcií: metaanalýza / Trading volume and expected stock returns: a meta-analysis

Bajzík, Josef January 2019 (has links)
I investigate the relationship between expected stock returns and trading volume. I collect 522 estimates from 46 studies and conduct the first meta-analysis in this field. Bayesian model averaging and frequentist model averaging help me to discover the most influential factors affecting the return-volume relationship, since I control for more than 50 differences among the primary articles, such as the midyear and type of data, the length of the primary dataset, the size of the market, and the model employed. In the end, I find that the relationship between expected stock returns and trading volume is rather negligible. On the other hand, the contemporaneous relationship between returns and volume is positive. These two findings cut through the mixed results of previously published studies. Moreover, the investigated relationship is influenced by the size of the country of interest and its level of development. In addition, primary studies that employ higher data frequencies provide substantially larger estimates than studies with data from longer time periods. By contrast, there is no difference among the estimation methodologies used. Finally, I employ classical and modern techniques, such as the stem-based methodology, for publication bias detection, and I find evidence of it in this field.
79

Ponderação bayesiana de modelos utilizando diferentes séries de precipitação aplicada à simulação chuva-vazão na Bacia do Ribeirão da Onça / Bayesian model averaging using different precipitation series applied to rainfall-runoff simulation in the Ribeirão da Onça Basin

Meira Neto, Antônio Alves 11 July 2013 (has links)
Neste trabalho foi proposta uma estratégia de modelagem hidrológica para a transformação chuva vazão da Bacia do Ribeirão da Onça (B.R.O) utilizando-se técnicas de auto calibração com análise de incertezas e de ponderação de modelos. Foi utilizado o modelo hidrológico Soil and Water Assessment Tool (SWAT), por ser um modelo que possui uma descrição física e de maneira distribuída dos processos hidrológicos da bacia. Foram propostas cinco diferentes séries de precipitação e esquemas de interpolação espacial a serem utilizados como dados de entrada para o modelo SWAT. Em seguida, utilizou-se o método semiautomático Sequential Uncertainty Fitting ver.-2 (SUFI-2) para a auto calibração e análise de incertezas dos parâmetros do modelo e produção de respostas com intervalos de incerteza para cada uma das séries de precipitação utilizadas. Por fim, foi utilizado o método de ponderação bayesiana de modelos (BMA) para o pós-processamento estocástico das respostas. Os resultados da análise de incerteza dos parâmetros do modelo SWAT indicam uma não adequação do método Soil Conservation Service (SCS) para simulação da geração do escoamento superficial, juntamente com uma necessidade de maior investigação das propriedades físicas do solo da bacia. A análise da precisão e acurácia dos resultados das séries de precipitação em comparação com a resposta combinada pelo método BMA sugerem a última como a mais adequada para a simulação chuva-vazão na B.R.O. / This study proposed an approach to the hydrological modeling of the Ribeirão da Onça Basin (B.R.O) based on automatic calibration and uncertainty analysis methods, together with model averaging. The Soil and Water Assessment Tool (SWAT) was used due to its distributed nature and physical description of hydrologic processes. An ensemble composed of five different precipitation schemes, based on different sources and spatial interpolation methods, was used.
The Sequential Uncertainty Fitting ver-2 (SUFI-2) procedure was used for automatic calibration and uncertainty analysis of the SWAT model parameters, together with the generation of streamflow simulations with uncertainty intervals. Next, Bayesian model averaging (BMA) was used to merge the different responses into a single probabilistic forecast. The results of the uncertainty analysis for the SWAT parameters show that the Soil Conservation Service (SCS) model for surface runoff prediction may not be suitable for the B.R.O, and that further investigation of the soil physical properties of the basin is recommended. An analysis of the accuracy and precision of the simulations produced by the precipitation ensemble members against the BMA simulation supports the use of the latter as a suitable framework for streamflow simulations at the B.R.O.
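The BMA combination step can be sketched as a weighted Gaussian mixture in the style of Raftery et al. (2005), where the predictive variance splits into between-member spread plus within-member variance. The weights and member variances below are illustrative; in practice they are estimated, typically by EM on a training period.

```python
import numpy as np

def bma_combine(forecasts, weights, sigma2):
    """BMA predictive mean and variance for a Gaussian mixture of
    ensemble members (Raftery et al., 2005 style):
    mean = sum_k w_k f_k
    var  = sum_k w_k (f_k - mean)^2  +  sum_k w_k sigma2_k
    i.e. between-member spread plus average within-member variance."""
    f, w, s2 = (np.asarray(a, dtype=float) for a in (forecasts, weights, sigma2))
    mean = w @ f
    var = w @ (f - mean) ** 2 + w @ s2
    return mean, var

# Three ensemble members' streamflow forecasts, BMA weights, and variances
m, v = bma_combine([10.0, 12.0, 20.0], [0.5, 0.3, 0.2], [1.0, 1.0, 4.0])
```

The variance decomposition is what makes the BMA forecast probabilistic rather than a simple weighted average: disagreement among members widens the predictive interval even when each member is individually confident.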
80

Atualização dinâmica de modelo de regressão logística binária para detecção de fraudes em transações eletrônicas com cartão de crédito / Dynamic update of binary logistic regression model for fraud detection in electronic credit card transactions

Beraldi, Fidel 01 December 2014 (has links)
Com o avanço tecnológico e econômico, que facilitaram o processo de comunicação e aumento do poder de compra, transações com cartão de crédito tornaram-se o principal meio de pagamento no varejo nacional e internacional (Bolton e Hand , 2002). Neste aspecto, o aumento do número de transações com cartão de crédito é crucial para a geração de mais oportunidades para fraudadores produzirem novas formas de fraudes, o que resulta em grandes perdas para o sistema financeiro (Chan et al. , 1999). Os índices de fraudes têm mostrado que transações no comércio eletrônico (e-commerce) são mais arriscadas do que transações presencias em terminais, pois aquelas não fazem uso de processos seguros e eficientes de autenticação do portador do cartão, como utilização de senha eletrônica. Como os fraudadores se adaptam rapidamente às medidas de prevenção, os modelos estatísticos para detecção de fraudes precisam ser adaptáveis e flexíveis para evoluir ao longo do tempo de maneira dinâmica. Raftery et al. (2010) desenvolveram um método chamado Dynamic Model Averaging (DMA), ou Ponderação Dinâmica de Modelos, que implementa um processo de atualização contínuo ao longo do tempo. Nesta dissertação, desenvolvemos modelos DMA no espaço de transações eletrônicas oriundas do comércio eletrônico que incorporem as tendências e características de fraudes em cada período de análise. Também desenvolvemos modelos de regressão logística clássica com o objetivo de comparar as performances no processo de detecção de fraude. Os dados utilizados para tal são provenientes de uma empresa de meios de pagamentos eletrônico. O experimento desenvolvido mostra que os modelos DMA apresentaram resultados melhores que os modelos de regressão logística clássica quando analisamos a medida F e a área sob a curva ROC (AUC). A medida F para o modelo DMA ficou em 58% ao passo que o modelo de regressão logística clássica ficou em 29%. 
Já para a AUC, o modelo DMA alcançou 93% e o modelo de regressão logística clássica 84%. Considerando os resultados encontrados para os modelos DMA, podemos concluir que sua característica de atualização ao longo do tempo se mostra um grande diferencial em dados como os de fraude, que sofrem mudanças de comportamento a todo momento. Deste modo, sua aplicação se mostra adequada no processo de detecção de transações fraudulentas no ambiente de comércio eletrônico. / Regarding technological and economic development, which made the communication process easier and increased purchasing power, credit card transactions have become the primary payment method in national and international retail (Bolton and Hand, 2002). In this scenario, as the number of credit card transactions grows, more opportunities are created for fraudsters to produce new kinds of fraud, resulting in large losses for the financial system (Chan et al., 1999). Fraud indexes have shown that e-commerce transactions are riskier than card-present transactions, since the former do not use secure and efficient processes to authenticate the cardholder, such as a personal identification number (PIN). Because fraudsters adapt quickly to fraud prevention measures, statistical models for fraud detection need to be adaptable and flexible enough to change over time in a dynamic way. Raftery et al. (2010) developed a method called Dynamic Model Averaging (DMA), which implements a process of continuous updating over time. In this thesis, we develop DMA models for electronic transactions from the e-commerce environment, which incorporate the trends and characteristics of fraud in each period of analysis. We also develop classic logistic regression models in order to compare their performance in the fraud detection process. The database used for the experiment was provided by an electronic payment service company.
The experiment shows that DMA models present better results than classic logistic regression models with respect to the F measure and the area under the ROC curve (AUC). The F measure for the DMA model was 58%, while that for the classic logistic regression model was 29%. For the AUC, the DMA model reached 93% and the classical model 84%. Considering the results for the DMA models, we can conclude that their characteristic of updating over time makes a large difference on data such as fraud data, which undergo behavioral changes continuously. Thus, their application proves appropriate for detecting fraudulent transactions in the e-commerce environment.
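The DMA recursion of Raftery et al. (2010) that the thesis builds on boils down to a forgetting step followed by a Bayes update of the model probabilities. A minimal sketch of one such step (the forgetting factor and likelihood values are illustrative):

```python
import numpy as np

def dma_update(weights, pred_lik, alpha=0.99):
    """One DMA step (Raftery et al., 2010):
    1) forgetting:   w_{t|t-1,k} proportional to w_{t-1,k} ** alpha
    2) Bayes update: w_{t,k}     proportional to w_{t|t-1,k} * p_k(y_t),
    where p_k(y_t) is model k's predictive likelihood for the new point."""
    w = np.asarray(weights, dtype=float) ** alpha
    w /= w.sum()
    w *= np.asarray(pred_lik, dtype=float)
    return w / w.sum()

w = dma_update([0.5, 0.5], pred_lik=[0.2, 0.6])  # model 2 fits the new point better
```

The forgetting exponent alpha < 1 flattens the weights slightly each period, which is what lets the model probabilities drift toward whichever specification currently explains the fraud patterns best.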
