61

Mixture Model Averaging for Clustering

Wei, Yuhong 30 April 2012 (has links)
Model-based clustering is based on a finite mixture of distributions, where each mixture component corresponds to a different group, cluster, subpopulation, or part thereof. Gaussian mixture distributions are most often used. Criteria commonly used in choosing the number of components in a finite mixture model include the Akaike information criterion, the Bayesian information criterion, and the integrated completed likelihood. The best model is taken to be the one with the highest (or lowest) value of a given criterion. This approach is problematic because it is practically impossible to decide what to do when the difference between the best values of two models under such a criterion is ‘small’. Furthermore, it is not clear how such values should be calibrated in different situations with respect to sample size and the random variables in the model, nor does this approach take into account the magnitude of the likelihood. It is therefore worthwhile considering a model-averaging approach. We consider averaging the top M mixture models and consider applications in clustering and classification. In the course of model averaging, the top M models often have different numbers of mixture components. We therefore propose a method of merging Gaussian mixture components in order to obtain the same number of clusters across the top M models. The idea is to list all combinations of components for merging and then choose the combination with the largest adjusted Rand index (ARI) with respect to the ‘reference model’. A weight is defined to quantify the importance of each model. The effectiveness of mixture model averaging for clustering is demonstrated on simulated and real data using the pgmm package, where the ARI from mixture model averaging exceeds that of the corresponding single best model. An attractive feature of mixture model averaging is its computational efficiency; it uses only the conditional membership probabilities. Herein, Gaussian mixture models are used, but the approach could be applied effectively, without modification, to other mixture models. / Paul McNicholas
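To make the averaging step concrete, the following is a minimal Python sketch (using scikit-learn rather than the R pgmm package named above) of BIC-weighted averaging of the conditional membership probabilities from several Gaussian mixture models. The ARI-driven merging of components is replaced here by a simple label-matching step against the best (‘reference’) model, so all candidate models share the same number of components; everything in the snippet is illustrative rather than the thesis's implementation.

```python
# Illustrative sketch: BIC-weighted averaging of Gaussian mixture responsibilities.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.5, random_state=1)

# Candidate models: same number of components, different covariance structures.
models = [GaussianMixture(n_components=3, covariance_type=ct, random_state=1).fit(X)
          for ct in ("full", "tied", "diag", "spherical")]
bic = np.array([m.bic(X) for m in models])

# BIC-based model weights (lower BIC = better fit).
delta = bic - bic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()

ref_labels = models[int(np.argmin(bic))].predict(X)   # best model acts as reference

def align(resp, labels, ref_labels, k=3):
    """Permute the columns of resp so component labels match the reference model."""
    cost = -np.array([[np.sum((labels == i) & (ref_labels == j)) for j in range(k)]
                      for i in range(k)])
    _, perm = linear_sum_assignment(cost)
    return resp[:, np.argsort(perm)]

# Weighted average of aligned conditional membership probabilities.
avg_resp = sum(wi * align(m.predict_proba(X), m.predict(X), ref_labels)
               for wi, m in zip(w, models))
print(avg_resp.argmax(axis=1)[:10])   # averaged cluster assignments
```

The same weighting idea carries over when the top M models first need their components merged to a common number of clusters, as described in the abstract.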
62

Bayesian Multiregression Dynamic Models with Applications in Finance and Business

Zhao, Yi January 2015 (has links)
This thesis discusses novel developments in Bayesian analytics for high-dimensional multivariate time series. The focus is on the class of multiregression dynamic models (MDMs), which can be decomposed into sets of univariate models processed in parallel yet coupled for forecasting and decision making. Parallel processing greatly speeds up the computations and vastly expands the range of time series to which the analysis can be applied.

I begin by defining a new sparse representation of the dependence between the components of a multivariate time series. Using this representation, innovations involve sparse dynamic dependence networks, idiosyncrasies in time-varying auto-regressive lag structures, and flexibility of discounting methods for stochastic volatilities.

For exploration of the model space, I define a variant of the Shotgun Stochastic Search (SSS) algorithm. Under the parallelizable framework, this new SSS algorithm allows the stochastic search to move in each dimension simultaneously at each iteration, and thus it moves much faster to high probability regions of model space than does traditional SSS.

For the assessment of model uncertainty in MDMs, I propose an innovative method that converts model uncertainties from the multivariate context to the univariate context using Bayesian Model Averaging and power discounting techniques. I show that this approach can succeed in effectively capturing time-varying model uncertainties on various model parameters, while also identifying practically superior predictive and lucrative models in financial studies.

Finally I introduce common state coupled DLMs/MDMs (CSCDLMs/CSCMDMs), a new class of models for multivariate time series. These models are related to the established class of dynamic linear models, but include both common and series-specific state vectors and incorporate multivariate stochastic volatility. Bayesian analytics are developed including sequential updating, using a novel forward-filtering-backward-sampling scheme. Online and analytic learning of observation variances is achieved by an approximation method using variance discounting. This method results in faster computation for sequential step-ahead forecasting than MCMC, satisfying the requirement of speed for real-world applications.

A motivating example is the problem of short-term prediction of electricity demand in a "Smart Grid" scenario. Previous models do not enable either time-varying, correlated structure or online learning of the covariance structure of the state and observational evolution noise vectors. I address these issues by using a CSCMDM and applying a variance discounting method for learning correlation structure. Experimental results on a real data set, including comparisons with previous models, validate the effectiveness of the new framework. / Dissertation
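As a rough illustration of one ingredient above, the sketch below shows a power-discounted Bayesian model averaging update for tracking time-varying model probabilities: at each step the previous probabilities are raised to a discount power and renormalised after multiplication by each model's one-step predictive density. The two candidate "models" are fixed Gaussian densities invented for the example; none of the MDM/DLM machinery of the thesis is implemented.

```python
# Schematic power-discounted model-probability update, p_t(M_j) ∝ p_{t-1}(M_j)^delta * f_j(y_t).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y = rng.normal(loc=0.5, scale=1.0, size=200)          # simulated univariate series

# Two competing one-step predictive densities (hypothetical candidates).
predictives = [norm(loc=0.0, scale=1.0), norm(loc=0.5, scale=1.0)]

delta = 0.95                                          # power-discount factor
probs = np.full(len(predictives), 1 / len(predictives))
history = []

for yt in y:
    probs = probs ** delta                            # discount past information
    probs *= np.array([p.pdf(yt) for p in predictives])  # update with current likelihood
    probs /= probs.sum()
    history.append(probs.copy())

print(np.round(history[-1], 3))                       # time-t model probabilities
```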
63

A fault diagnosis technique for complex systems using Bayesian data analysis

Lee, Young Ki 01 April 2008 (has links)
This research develops a fault diagnosis method for complex systems in the presence of uncertainties and the possibility of multiple solutions. Fault diagnosis is a challenging problem because the data used in diagnosis contain random errors and often systematic errors as well. Furthermore, fault diagnosis is fundamentally an inverse problem, so it inherits the unfavorable characteristics of inverse problems: the existence and uniqueness of an inverse solution are not guaranteed, and the solution may be unstable. The weighted least squares method and its variations are traditionally used for solving inverse problems. However, the existing algorithms often fail to identify multiple solutions if they are present. In addition, the existing algorithms are not capable of selecting variables systematically, so they generally use the full model, which may contain unnecessary variables as well as necessary ones. Ignoring this model uncertainty often gives rise to the so-called smearing effect in solutions, in which unnecessary variables are overestimated and necessary variables are underestimated. The proposed method solves the inverse problem using Bayesian inference. An engineering system can be parameterized using state variables. The probability of each state variable is inferred from observations made on the system. A bias in an observation is treated as a variable, and the probability of the bias variable is inferred as well. To take the uncertainty of the model structure into account, multiple Bayesian models are created with various combinations of the state variables and the bias variables. The results from all models are averaged according to how likely each model is. Gibbs sampling is used to approximate the updated probabilities. The method is demonstrated for two applications: the status matching of a turbojet engine and the fault diagnosis of an industrial gas turbine. In the status matching application, only physical faults in the components of a turbojet engine are considered, whereas in the fault diagnosis application sensor biases are considered as well as physical faults. The proposed method is tested in various faulty conditions using simulated measurements. Results show that the proposed method identifies physical faults and sensor biases simultaneously. It is also demonstrated that multiple solutions can be identified. Overall, there is a clear improvement in the ability to identify correct solutions over the full model that contains all state and bias variables.
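The following sketch illustrates the general idea of averaging over candidate fault models rather than committing to the full model. It enumerates subsets of state variables in a toy linear measurement model and weights each subset's least-squares estimate by a BIC approximation to its posterior model probability; the Gibbs-sampling and bias-variable machinery of the thesis is not reproduced, and all quantities are simulated for illustration.

```python
# Illustrative sketch: model averaging over subsets of candidate fault variables.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_vars = 30, 4
H = rng.normal(size=(n_obs, n_vars))          # sensitivity of measurements to fault variables
x_true = np.array([0.8, 0.0, 0.0, -0.5])      # only variables 0 and 3 are actually faulty
y = H @ x_true + rng.normal(scale=0.05, size=n_obs)

log_w, estimates = [], []
subsets = itertools.chain.from_iterable(
    itertools.combinations(range(n_vars), k) for k in range(1, n_vars + 1))
for subset in subsets:
    cols = list(subset)
    xs, *_ = np.linalg.lstsq(H[:, cols], y, rcond=None)
    resid = y - H[:, cols] @ xs
    sigma2 = max(resid @ resid / n_obs, 1e-12)
    bic = n_obs * np.log(sigma2) + len(cols) * np.log(n_obs)  # crude marginal-likelihood proxy
    est = np.zeros(n_vars)
    est[cols] = xs
    log_w.append(-0.5 * bic)
    estimates.append(est)

w = np.exp(np.array(log_w) - max(log_w))
w /= w.sum()
post_mean = (w[:, None] * np.array(estimates)).sum(axis=0)
print(np.round(post_mean, 3))                 # model-averaged fault estimates
```

Averaging in this way shrinks the estimates of variables that appear only in poorly supported models, which is the intuition behind mitigating the smearing effect.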
64

Income Inequality and Economic Growth: A Meta-Analysis

Posvyanskaya, Alexandra January 2018 (has links)
The impact of inequality on economic growth has become a topic of broad and current interest. Many studies have investigated the issue, but the disparity of opinions and empirical results is huge. The present thesis revises the primary literature through a meta-analytical approach applying the Bayesian Model Averaging (BMA) estimation technique. We examine 562 estimates collected from 58 studies published between 1991 and 2015. I find evidence of publication bias in the literature: the authors of primary studies tend to preferentially report negative and significant estimates. The BMA results suggest that the effect of inequality on growth is not straightforward and is likely not linear. A single pattern for the inequality-growth relationship is not feasible, since the results vary across the income inequality measures used, estimation methods, and data structure and quality. JEL Classification D31, O10, C11, C82 Keywords meta-analysis, inequality, economic growth, Bayesian model averaging, publication bias Author's e-mail 23376990@fsv.cuni.cz Supervisor's e-mail zuzana.havrankova@fsv.cuni.cz
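As a small illustration of how publication bias of this kind is typically detected, the sketch below runs a funnel-asymmetry / precision-effect (FAT-PET) style regression of reported estimates on their standard errors, on data simulated to mimic selective reporting. It is a generic meta-analysis device, not the thesis's own BMA specification.

```python
# Illustrative FAT-PET regression: estimate_i = alpha + beta * SE_i + error_i.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
se = rng.uniform(0.02, 0.4, size=n)                  # reported standard errors
true_effect = -0.05
est = true_effect + rng.normal(scale=se)             # reported estimates

# Mimic selective reporting: drop half of the positive, insignificant results.
keep = ~((est > 0) & (np.abs(est / se) < 1.96) & (rng.random(n) < 0.5))
est, se = est[keep], se[keep]

X = sm.add_constant(se)
fat_pet = sm.WLS(est, X, weights=1 / se**2).fit()    # precision-weighted FAT-PET
print(fat_pet.params)   # constant ~ bias-corrected effect, slope ~ publication bias
```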
65

Bankruptcy prediction models in the Czech economy: New specification using Bayesian model averaging and logistic regression on the latest data

Kolísko, Jiří January 2017 (has links)
The main objective of our research was to develop a new bankruptcy prediction model for the Czech economy. For that purpose, we used logistic regression and 150,000 financial statements collected for the 2002-2016 period. We defined 41 explanatory variables (25 financial ratios and 16 dummy variables) and used Bayesian model averaging to select the best set of explanatory variables. The resulting model has been estimated for three prediction horizons: one, two, and three years before bankruptcy, so that we could assess the changes in the importance of explanatory variables and in the models' prediction accuracy. To deal with the high class imbalance in our dataset, due to the small number of bankrupt firms, we applied over- and under-sampling methods to the training sample (80% of the data). These methods proved to enhance our classifier's accuracy for all specifications and periods. The accuracy of our models has been evaluated with receiver operating characteristic curves, sensitivity-specificity curves, and precision-recall curves. In comparison with models examined on similar data, our model performed very well. In addition, we have selected the most powerful predictors for short- and long-term horizons, which is potentially of high relevance for practice. JEL Classification C11, C51, C53, G33, M21 Keywords Bankruptcy...
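A compact sketch of the kind of pipeline described above is given below: a logistic-regression classifier trained on a heavily imbalanced simulated dataset, with simple random over-sampling of the minority class applied to the 80% training sample only, and evaluation by ROC and precision-recall summaries. The BMA-based variable selection step is not reproduced, and all data and settings are illustrative.

```python
# Illustrative imbalanced-classification pipeline with over-sampling and ROC/PR evaluation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=25, weights=[0.98, 0.02],
                           random_state=0)                 # ~2% "bankrupt" firms
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=0)  # 80% training sample

# Random over-sampling of the minority class in the training sample only.
rng = np.random.default_rng(0)
minority = np.flatnonzero(y_tr == 1)
extra = rng.choice(minority, size=np.sum(y_tr == 0) - minority.size, replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
scores = clf.predict_proba(X_te)[:, 1]
print("ROC-AUC:", round(roc_auc_score(y_te, scores), 3))
print("PR-AUC :", round(average_precision_score(y_te, scores), 3))
```

Evaluating on the untouched test split keeps the accuracy estimates honest, since the resampling is applied only to the training data.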
66

Three Essays on Financial Development

Mareš, Jan January 2020 (has links)
The dissertation is a compilation of three empirical papers on the effects of financial development. In the first paper, we examine the effect of finance on long-term economic growth using Bayesian model averaging to address model uncertainty. Our findings from a global sample indicate that the efficiency of financial intermediation is robustly related to long-term growth. The second and third papers investigate the determinants of wealth and income inequality, capturing various economic, financial, political, institutional, and geographical factors. We reveal that finance plays a considerable role in shaping both distributions.
67

Income Inequality and Happiness: A Meta-Analysis

Kamenická, Lucie January 2021 (has links)
The relationship between income inequality and happiness is central to a host of welfare policies. If higher income inequality reduces people's happiness, advocating for income redistribution from the rich to the poor could make society happier. We show, however, that this popular consensus on the relationship's direction is largely absent from the academic literature. Based on 868 observations collected from 53 studies and controlling for 62 aspects of study design, we use state-of-the-art meta-analysis techniques to identify several important drivers of the effect. Unless each study gets the same weight, the literature is driven by publication bias pushing the estimates against the popular consensus. While geographical differences dominate among the systematic influences on the relationship's magnitude, the relationship is also strongly affected by the various methods and data the authors use in the primary studies. Most prominently, it matters whether authors control for different individual characteristics, such as perceived trust in people or health status.
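To illustrate how drivers of the effect among study-design characteristics can be identified, the sketch below computes posterior inclusion probabilities for a handful of made-up moderators by enumerating candidate regressions and weighting them with a BIC approximation to the marginal likelihood. It is a generic Bayesian model averaging device with invented moderator names, not the specification used in the thesis.

```python
# Illustrative posterior inclusion probabilities over study-design moderators.
import itertools
import numpy as np

rng = np.random.default_rng(4)
n, names = 300, ["trust_control", "health_control", "panel_data", "europe"]
Z = rng.binomial(1, 0.5, size=(n, len(names)))        # hypothetical study characteristics
effect = -0.1 + 0.15 * Z[:, 0] + rng.normal(scale=0.1, size=n)

log_w, included = [], []
for k in range(len(names) + 1):
    for subset in itertools.combinations(range(len(names)), k):
        X = np.column_stack([np.ones(n)] + [Z[:, j] for j in subset])
        beta, *_ = np.linalg.lstsq(X, effect, rcond=None)
        rss = np.sum((effect - X @ beta) ** 2)
        bic = n * np.log(rss / n) + X.shape[1] * np.log(n)
        log_w.append(-0.5 * bic)
        included.append([int(j in subset) for j in range(len(names))])

w = np.exp(np.array(log_w) - max(log_w))
w /= w.sum()
pip = w @ np.array(included)
print(dict(zip(names, np.round(pip, 3))))   # posterior inclusion probabilities
```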
68

Paradoxes and Priors in Bayesian Regression

Som, Agniva 30 December 2014 (has links)
No description available.
69

Quantification of Multiple Types of Uncertainty in Physics-Based Simulation

Park, Inseok 15 December 2012 (has links)
No description available.
70

ESSAYS IN NONSTATIONARY TIME SERIES ECONOMETRICS

Xuewen Yu (13124853) 26 July 2022 (has links)
This dissertation is a collection of four essays on nonstationary time series econometrics, which are grouped into four chapters. The first chapter investigates the inference in mildly explosive autoregressions under unconditional heteroskedasticity. The second chapter develops a new approach to forecasting a highly persistent time series that employs feasible generalized least squares (FGLS) estimation of the deterministic components in conjunction with Mallows model averaging. The third chapter proposes new bootstrap procedures for detecting multiple persistence shifts in a time series driven by nonstationary volatility. The last chapter studies the problem of testing partial parameter stability in cointegrated regression models.
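As an illustration of the Mallows model averaging ingredient mentioned for the second chapter, the sketch below combines nested autoregressions of a simulated highly persistent series using weights on the unit simplex that minimise a Mallows-type criterion. The FGLS treatment of deterministic components and the rest of the chapter's procedure are not reproduced; the series, lag orders, and optimiser are assumptions of the example.

```python
# Illustrative Mallows model averaging over nested AR(p) models.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
T, phi = 400, 0.95
y = np.zeros(T)
for t in range(1, T):                        # highly persistent AR(1) series
    y[t] = phi * y[t - 1] + rng.normal()

max_lag = 4
Y = y[max_lag:]
X_full = np.column_stack([np.ones(T - max_lag)] +
                         [y[max_lag - l:T - l] for l in range(1, max_lag + 1)])

# Fitted values and parameter counts of the nested models AR(1), ..., AR(max_lag).
fits, ks = [], []
for p in range(1, max_lag + 1):
    X = X_full[:, :p + 1]
    H = X @ np.linalg.solve(X.T @ X, X.T)    # hat matrix of model p
    fits.append(H @ Y)
    ks.append(p + 1)
fits, ks = np.array(fits), np.array(ks)
sigma2 = np.sum((Y - fits[-1]) ** 2) / (len(Y) - ks[-1])   # variance from largest model

def mallows(w):
    resid = Y - w @ fits
    return resid @ resid + 2 * sigma2 * (w @ ks)           # Mallows-type criterion

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
res = minimize(mallows, np.full(max_lag, 1 / max_lag), bounds=[(0, 1)] * max_lag,
               constraints=cons, method="SLSQP")
print(np.round(res.x, 3))                    # Mallows model-averaging weights
```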
