21

Bayesian methods for the construction of robust chronologies

Lee, Sharen Woon Yee January 2012 (has links)
Bayesian modelling is a widely used, powerful approach for reducing absolute dating uncertainties in archaeological research. It is important that the methods used in chronology building are robust and reflect substantial prior knowledge. This thesis focuses on the development and evaluation of two novel prior models: the trapezoidal phase model and the Poisson process deposition model. Firstly, the limitations of the trapezoidal phase model were investigated by testing the model assumptions using simulations. It was found that a simple trapezoidal phase model does not reflect substantial prior knowledge, and the addition of a non-informative element to the prior was proposed. An alternative parameterisation was also presented to extend its use to a contiguous phase scenario. This method transforms the commonly used abrupt transition model to allow for gradual changes. The second phase of this research evaluates the use of Bayesian model averaging in the Poisson process deposition model. The use of model averaging extends the application of the Poisson process model to remove the subjectivity involved in model selection. The last part of this thesis applies these models to different case studies, including attempts at resolving the Iron Age chronological debate in Israel, at determining the age of an important Quaternary tephra, at refining a cave chronology, and at more accurately modelling the mid-Holocene elm decline in the British Isles. The Bayesian methods discussed in this thesis are widely applicable in modelling situations where the associated prior assumptions are appropriate. Therefore, they are not limited to the case studies addressed in this thesis, nor are they limited to analysing radiocarbon chronologies.
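The trapezoidal phase model replaces the abrupt start and end of a uniform phase with linear ramps, so the prior probability of an event date rises, plateaus and then falls. The sketch below is a minimal illustration of such a density, assuming four boundary dates a ≤ b ≤ c ≤ d (start of rise, start of plateau, end of plateau, end of fall); the function name and parameterisation are illustrative, not the thesis's or OxCal's.

```python
import numpy as np

def trapezoid_prior(t, a, b, c, d):
    """Trapezoidal prior density for an event date t.

    The density rises linearly from a to b, is flat between b and c, and
    falls linearly from c to d, a gradual-transition alternative to the
    abrupt (uniform) phase model. Assumes a < b <= c < d.
    """
    t = np.asarray(t, dtype=float)
    height = 2.0 / ((d + c) - (a + b))          # makes the density integrate to 1
    dens = np.zeros_like(t)
    rise = (t >= a) & (t < b)
    flat = (t >= b) & (t <= c)
    fall = (t > c) & (t <= d)
    dens[rise] = height * (t[rise] - a) / (b - a)
    dens[flat] = height
    dens[fall] = height * (d - t[fall]) / (d - c)
    return dens

# hypothetical phase: rising AD 800-850, plateau to AD 1000, fading by AD 1050
grid = np.linspace(750, 1100, 701)
p = trapezoid_prior(grid, a=800, b=850, c=1000, d=1050)
```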
22

Discriminating Between Optimal Follow-Up Designs

Kelly, Kevin Donald 02 May 2012 (has links)
Sequential experimentation is often employed in process optimization, wherein a series of small experiments is run successively to determine which experimental factor levels are likely to yield a desirable response. Although there currently exists a framework for identifying optimal follow-up designs after an initial experiment has been run, the accepted methods frequently point to multiple designs, leaving the practitioner to choose one arbitrarily. In this thesis, we apply preposterior analysis and Bayesian model averaging to develop a methodology for further discriminating between optimal follow-up designs while controlling for both parameter and model uncertainty.
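As a rough illustration of the preposterior idea (not the thesis's exact criterion), one can simulate hypothetical responses at each candidate follow-up design under the current model probabilities, re-weight the rival models against the simulated data, and prefer the design whose expected posterior model entropy is lowest. Everything below, the model list, the coefficient draws and the crude least-squares re-weighting, is an assumption made for the sketch.

```python
import numpy as np

def expected_model_entropy(design, models, model_probs, sigma=1.0,
                           n_sim=200, seed=0):
    """Score a candidate follow-up design by the expected entropy of the
    model probabilities after observing hypothetical data at its runs
    (lower entropy = better discrimination between rival models)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_sim):
        # draw a "true" model according to the current model probabilities
        k = rng.choice(len(models), p=model_probs)
        Xk = models[k](design)                       # model matrix at the design
        beta = rng.normal(0.0, 1.0, Xk.shape[1])     # plausible coefficient draw
        y = Xk @ beta + rng.normal(0.0, sigma, len(design))
        # crude re-weighting of every candidate model by its fit to y
        logw = np.empty(len(models))
        for j, m in enumerate(models):
            Xj = m(design)
            resid = y - Xj @ np.linalg.lstsq(Xj, y, rcond=None)[0]
            logw[j] = -0.5 * (resid @ resid) / sigma**2 - 0.5 * Xj.shape[1]
        w = np.exp(logw - logw.max())
        w /= w.sum()
        total += -(w * np.log(w + 1e-12)).sum()
    return total / n_sim

# example: two rival models for a single factor x, with and without curvature
models = [lambda x: np.column_stack([np.ones(len(x)), x]),
          lambda x: np.column_stack([np.ones(len(x)), x, x**2])]
for design in (np.array([-1.0, 0.0, 1.0]), np.array([-1.0, -1.0, 1.0])):
    print(design, expected_model_entropy(design, models, [0.5, 0.5]))
```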
23

Forecasting the Equity Premium and Optimal Portfolios

Bjurgert, Johan, Edstrand, Marcus January 2008 (has links)
The expected equity premium is an important parameter in many financial models, especially within portfolio optimization. A good forecast of the future equity premium is therefore of great interest. In this thesis we seek to forecast the equity premium, use it in portfolio optimization, and then give evidence on how sensitive the results are to estimation errors and how their impact can be minimized. Linear prediction models are commonly used by practitioners to forecast the expected equity premium, with mixed results. Choosing only the model that performs best in-sample for forecasting does not take model uncertainty into account. Our approach is to still use linear prediction models, but to take model uncertainty into consideration by applying Bayesian model averaging. The predictions are used in the optimization of a portfolio with risky assets to investigate how sensitive portfolio optimization is to estimation errors in the mean vector and covariance matrix. This is performed by using a Monte Carlo based heuristic called portfolio resampling. The results show that the predictive ability of linear models is not substantially improved by taking model uncertainty into consideration. This could mean that the main problem with linear models is not model uncertainty, but rather too low predictive ability. However, we find that our approach gives better forecasts than just using the historical average as an estimate. Furthermore, we find some predictive ability in GDP, the short-term spread and volatility for the five years to come. Portfolio resampling proves to be useful when the input parameters in a portfolio optimization problem suffer from vast uncertainty.
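Portfolio resampling, in the spirit of Michaud's approach, repeatedly perturbs the estimated mean vector and covariance matrix according to their sampling distributions, re-solves the optimization for each draw, and averages the resulting weights. The sketch below is a minimal illustration of that idea; the unconstrained tangency solution, the normalisation and all parameter names are assumptions, not the thesis's exact implementation.

```python
import numpy as np
from scipy import stats

def resampled_portfolio(mu_hat, sigma_hat, n_obs, n_draws=500, seed=0):
    """Average mean-variance weights over resampled estimates of the inputs.

    Each draw takes a Wishart-perturbed covariance matrix and a normally
    perturbed mean vector (their approximate sampling distributions given
    n_obs observations), solves the unconstrained problem w ~ Sigma^-1 mu,
    and the final portfolio is the average of the per-draw weights."""
    rng = np.random.default_rng(seed)
    k = len(mu_hat)
    weights = np.zeros((n_draws, k))
    for i in range(n_draws):
        sigma_i = stats.wishart.rvs(df=n_obs - 1,
                                    scale=sigma_hat / (n_obs - 1),
                                    random_state=rng)
        mu_i = rng.multivariate_normal(mu_hat, sigma_i / n_obs)
        w = np.linalg.solve(sigma_i, mu_i)       # unconstrained tangency direction
        weights[i] = w / np.abs(w).sum()         # crude normalisation for the sketch
    return weights.mean(axis=0)

# hypothetical annualised inputs for three risky assets
mu_hat = np.array([0.06, 0.04, 0.05])
sigma_hat = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.02, 0.01],
                      [0.00, 0.01, 0.03]])
print(resampled_portfolio(mu_hat, sigma_hat, n_obs=120))
```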
24

Bayesian Hierarchical Model for Combining Two-resolution Metrology Data

Xia, Haifeng 14 January 2010 (has links)
This dissertation presents a Bayesian hierarchical model to combine two-resolution metrology data for inspecting the geometric quality of manufactured parts. The high-resolution data points are scarce and thus sparsely scattered over the surface being measured, while the low-resolution data are pervasive but less accurate or less precise. Combining the two datasets can, in principle, yield a better prediction of the geometric surface of a manufactured part than using a single dataset. One challenge in combining the metrology datasets is the misalignment between the low- and high-resolution data points. This dissertation provides a Bayesian hierarchical model that can handle such misaligned datasets, and includes the following components: (a) a Gaussian process for modeling metrology data at the low-resolution level; (b) a heuristic matching and alignment method that produces a pool of candidate matches and transformations between the two datasets; (c) a linkage model, conditioned on a given match and its associated transformation, that connects a high-resolution data point to a set of low-resolution data points in its neighborhood and makes a combined prediction; and finally (d) Bayesian model averaging of the predictive models in (c) over the pool of candidate matches found in (b). This Bayesian model averaging procedure assigns weights to different matches according to how much they support the observed data, and then produces the final combined prediction of the surface based on the data of both resolutions. The proposed method improves upon methods that use a single dataset, as well as upon a combined prediction that does not address the misalignment problem. This dissertation demonstrates the improvements over alternative methods using both simulated data and the datasets from a milled sine-wave part measured by two coordinate measuring machines of different resolutions.
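Component (d) above is standard Bayesian model averaging of predictions: each candidate match receives a weight proportional to how well it supports the observed data (its marginal likelihood), and the combined predictive mean and variance follow the usual mixture formulas. A generic sketch is below; the names are illustrative, and the per-match predictions would come from the dissertation's linkage model.

```python
import numpy as np

def bma_combine(log_evidence, means, variances):
    """Combine per-match predictions by Bayesian model averaging.

    log_evidence[k] is the log marginal likelihood of candidate match k given
    the observed data; means[k] and variances[k] are that match's predictive
    mean and variance at the query locations."""
    log_evidence = np.asarray(log_evidence, dtype=float)
    w = np.exp(log_evidence - log_evidence.max())
    w /= w.sum()                                  # posterior weight of each match
    means, variances = np.asarray(means), np.asarray(variances)
    mean = np.einsum('k,kn->n', w, means)         # weighted predictive mean
    # law of total variance: within-match variance + between-match spread
    var = np.einsum('k,kn->n', w, variances + (means - mean) ** 2)
    return mean, var
```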
26

Bayesian Hierarchical Models for Model Choice

Li, Yingbo January 2013 (has links)
With the development of modern data collection approaches, researchers may collect hundreds to millions of variables, yet may not need to utilize all explanatory variables available in predictive models. Hence, choosing models that consist of a subset of variables often becomes a crucial step. In linear regression, variable selection not only reduces model complexity, but also prevents over-fitting. From a Bayesian perspective, prior specification of model parameters plays an important role in model selection as well as parameter estimation, and often prevents over-fitting through shrinkage and model averaging. We develop two novel hierarchical priors for selection and model averaging, for Generalized Linear Models (GLMs) and normal linear regression, respectively. They can be considered "spike-and-slab" prior distributions or, more appropriately, "spike-and-bell" distributions. Under these priors we achieve dimension reduction, since their point masses at zero allow predictors to be excluded with positive posterior probability. In addition, these hierarchical priors have heavy tails to provide robustness when MLEs are far from zero. Zellner's g-prior is widely used in linear models. It preserves the correlation structure among predictors in its prior covariance, and yields closed-form marginal likelihoods, which leads to huge computational savings by avoiding sampling in the parameter space. Mixtures of g-priors avoid fixing g in advance, and can resolve consistency problems that arise with fixed g. For GLMs, we show that the mixture of g-priors using a Compound Confluent Hypergeometric distribution unifies existing choices in the literature and maintains their good properties, such as tractable (approximate) marginal likelihoods and asymptotic consistency for model selection and parameter estimation under specific values of the hyperparameters. While the g-prior is invariant under rotation within a model, a potential problem with the g-prior is that it inherits the instability of ordinary least squares (OLS) estimates when predictors are highly correlated. We build a hierarchical prior based on scale mixtures of independent normals, which incorporates invariance under rotations within models like ridge regression and the g-prior, but has heavy tails like the Zellner-Siow Cauchy prior. We find this method outperforms the gold-standard mixture of g-priors and other methods in the case of highly correlated predictors in Gaussian linear models. We incorporate a non-parametric structure, the Dirichlet Process (DP), as a hyper-prior to allow more flexibility and adaptivity to the data.
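Under Zellner's g-prior with an intercept common to all models, the marginal likelihood of each subset of predictors is available in closed form, so posterior model probabilities (and hence model-averaged quantities) can be enumerated without any sampling in the parameter space. A minimal sketch with a fixed g is below; the default g = n (unit-information prior) and the uniform prior over models are assumptions, and mixtures of g-priors as in the dissertation would replace the fixed g.

```python
import numpy as np
from itertools import combinations

def g_prior_model_probs(X, y, g=None):
    """Posterior model probabilities over all subsets of predictors under
    Zellner's g-prior with fixed g (default g = n) and a uniform model prior.

    Uses the closed-form Bayes factor against the intercept-only model,
    BF = (1 + g)^((n - 1 - p) / 2) / (1 + g * (1 - R^2))^((n - 1) / 2),
    so no sampling in the parameter space is needed. Intended for small
    numbers of predictors, since all 2^k subsets are enumerated."""
    n, k = X.shape
    g = n if g is None else g
    yc = y - y.mean()
    tss = yc @ yc
    models, logbf = [], []
    for p in range(k + 1):
        for subset in combinations(range(k), p):
            if p == 0:
                r2 = 0.0                                   # intercept-only model
            else:
                Xs = np.column_stack([np.ones(n), X[:, subset]])
                beta = np.linalg.lstsq(Xs, y, rcond=None)[0]
                resid = y - Xs @ beta
                r2 = 1.0 - (resid @ resid) / tss
            models.append(subset)
            logbf.append(0.5 * (n - 1 - p) * np.log1p(g)
                         - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2)))
    logbf = np.array(logbf)
    probs = np.exp(logbf - logbf.max())
    return models, probs / probs.sum()
```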
27

Adaptive Interactive Expectations: Dynamically Modelling Profit Expectations

William Paul Bell Unknown Date (has links)
This thesis aims to develop an alternative expectations model to the Rational Expectations Hypothesis (REH) and adaptive-expectations models, one that provides more accurate temporal predictive performance and more closely reflects recent advances in behavioural economics, the ‘science of complexity’ and network dynamics. The model the thesis develops is called Adaptive Interactive Expectations (AIE), a subjective dynamic model of the process of expectations formation. To REH, the AIE model provides both an alternative and a complement. AIE and REH complement one another in that they are diametrically opposite in the following five dimensions: agent intelligence, agent interaction, agent homogeneity, equilibrium assumptions and the rationalisation process. REH and AIE stress the importance of hyper-intelligent agents interacting only via a price signal and near zero-intelligent agents interacting via a network structure, respectively. The complementary nature of AIE and REH provides dual perspectives that enhance analysis. The Dun & Bradstreet (D&B 2008) profit expectations survey is used in the thesis to calibrate AIE and make predictions. The predictive power of the AIE and REH models is compared. The thesis introduces the ‘pressure to change profit expectations index’, px. This index provides the ability to model unknowns within an adaptive dynamic process and to combine the beliefs from interactive-expectations, adaptive-expectations and biases that include pessimism, optimism and ambivalence. AIE uses networks to model the flow of interactive-expectations between firms. To overcome the uncertainty over the structure of the interactive network, the thesis uses model-averaging over 121 network topologies. These networks are defined by three variables regardless of their complexity. Unfortunately, the Bayesian technique’s use of the number of variables as a measure of complexity makes it unsuitable for model-averaging over the network topologies. To overcome this limitation in the Bayesian technique, the thesis introduces two model-averaging techniques, ‘runtime-weighted’ and ‘optimal-calibration’. These model-averaging techniques are benchmarked against ‘Bayes-factor model-averaging’ and ‘equal-weighted model-averaging’. In addition to the aggregate called all-firms, the D&B (2008) survey has four divisions: manufacturing durables, manufacturing non-durables, wholesale and retail. To make use of the four divisions, the thesis introduces a ‘link-intensity matrix’ based upon an ‘input-output table’ to improve the calibration of the networks. The transpose of the table is also used in the thesis. The two ‘link-intensity matrices’ are benchmarked against the default, a ‘matrix of ones’. The aggregated and disaggregated versions of AIE are benchmarked against adaptive-expectations to establish whether the interactive-expectations component of AIE adds value to the model. The thesis finds that AIE has more predictive power than REH. ‘Optimal-calibration model-averaging’ improves the predictive performance of the better-fitting versions of AIE, which are those versions that use the ‘input-output table’ and ‘matrix of ones’ link-intensity matrices. The ‘runtime-weighted model-averaging’ improves the predictive performance of only the ‘input-output table’ version of AIE. The interactive component of the AIE model improves the predictive performance of all versions of AIE over adaptive-expectations. There is an ambiguous effect on prediction performance from introducing the ‘input-output table’.
However, there is a clear reduction in predictive performance from introducing its transpose. AIE can inform the debate on government intervention by providing an Agent-Based Model (ABM) perspective on the conflicting mathematical and narrative views proposed by the Greenwald–Stiglitz Theorem and the Austrian school, respectively. Additionally, AIE can play a role complementary to REH, the two being descriptive/predictive and normative, respectively. The AIE network calibration uses an ‘input-output table’ to determine the link intensity; this method could provide Computable General Equilibrium (CGE) and Dynamic Stochastic General Equilibrium (DSGE) models with a way to improve their transmission mechanisms. Furthermore, the AIE network calibration and prediction methodology may help overcome the validation concerns of practitioners when they implement ABMs.
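The model-averaging step can be pictured as weighting each topology's forecast by how well that topology was calibrated, with equal weighting as the benchmark. The sketch below is a generic stand-in for the thesis's ‘optimal-calibration’ and ‘equal-weighted’ schemes; the inverse-squared-error weights are an assumption, not the thesis's definition.

```python
import numpy as np

def averaged_forecast(forecasts, calib_errors=None):
    """Model-average forecasts from K candidate network topologies.

    forecasts:    array of shape (K, T), one forecast path per topology.
    calib_errors: length-K in-sample calibration errors; if given, weights
                  are proportional to the inverse squared error, otherwise
                  every topology gets the equal weight 1/K."""
    forecasts = np.asarray(forecasts, dtype=float)
    if calib_errors is None:
        w = np.full(len(forecasts), 1.0 / len(forecasts))
    else:
        w = 1.0 / np.maximum(np.asarray(calib_errors, dtype=float), 1e-12) ** 2
        w /= w.sum()
    return w @ forecasts, w
```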
29

Ponderação de modelos com aplicação em regressão logística binária / Model Averaging with an Application to Binary Logistic Regression

Brocco, Juliane Bertini 18 April 2006 (has links)
This work considers the problem of how to incorporate model selection uncertainty into statistical inference, through model averaging, applied to logistic regression. The approach of Buckland et al. (1997) is used, which proposes a weighted estimator for a parameter common to all models under study, with the weights obtained from information criteria or the bootstrap method. Bayesian model averaging as presented by Hoeting et al. (1999) is also applied, in which the posterior distribution of the quantity of interest is an average of its posterior distributions under each of the models considered, weighted by their posterior model probabilities. The aim of this work is to study the behavior of the weighted estimator, in both the classical and the Bayesian approaches, in situations involving binary logistic regression, with a focus on prediction. The well-known stepwise model-selection method is considered as a benchmark for comparing predictive performance against model averaging.
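Buckland et al.'s weighted estimator can be sketched for prediction: fit each candidate logistic model, turn AIC differences into weights w_i proportional to exp(-ΔAIC_i / 2), and average the predicted probabilities. The code below is an illustrative sketch (the function and argument names are assumptions), using statsmodels for the logistic fits.

```python
import numpy as np
import statsmodels.api as sm

def akaike_weight_predictions(candidate_X, y, X_new_list):
    """Buckland et al. (1997)-style model averaging for binary logistic
    regression: fit each candidate model, convert AIC differences into
    weights w_i = exp(-0.5 * dAIC_i) / sum_j exp(-0.5 * dAIC_j), and average
    the predicted probabilities.

    candidate_X: list of design matrices, one per candidate model.
    y:           binary response (0/1).
    X_new_list:  list of new-observation design matrices, paired with
                 candidate_X, at which to predict."""
    fits = [sm.Logit(y, sm.add_constant(X)).fit(disp=0) for X in candidate_X]
    aic = np.array([f.aic for f in fits])
    w = np.exp(-0.5 * (aic - aic.min()))
    w /= w.sum()
    preds = np.column_stack([
        f.predict(sm.add_constant(Xn)) for f, Xn in zip(fits, X_new_list)
    ])
    return preds @ w        # model-averaged predicted probabilities
```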
30

Determinanty cen umělecké fotografie na aukcích / Price Determinants of Art Photography at Auctions

Habalová, Veronika January 2018 (has links)
In recent years, prices of art have repeatedly broken records, and the interest in investing in fine art photography has been growing. Although there is plenty of research dedicated to studying prices of paintings, fine art photography has been largely overlooked. This thesis aims to shed light on identifying price determinants for this particular medium. A new data set is collected from the sold-lot archives of Sotheby's and Phillips auction houses, which also provide images of some of the sold items. These images are then used to create new variables describing visual attributes of the artworks. In order to inspect the effect of color-related predictors on price, four different methods are discussed. Color is found to be significant in the OLS model, but the effect diminishes when model averaging is applied. Machine learning algorithms - regression trees and random forests - suggest that the importance of color is relatively low. The thesis also shows that expert estimates can be improved by incorporating available information and using random forests for prediction. The fact that the expert estimates are not very accurate suggests that they either do not use all the available information or do not process it efficiently.
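As a rough illustration of the random-forest step, one could predict the (log) hammer price from the expert estimate plus simple color features of the lot image and inspect the relative feature importances. Everything below, the data file, the column names and the chosen features, is a hypothetical placeholder, not the thesis's actual data or variables.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# hypothetical data set of sold lots with expert estimates and color features
df = pd.read_csv("photo_auctions.csv")
features = ["mid_estimate", "mean_hue", "mean_saturation", "mean_brightness",
            "colorfulness", "year_taken", "edition_size"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], np.log(df["hammer_price"]), test_size=0.2, random_state=1)

rf = RandomForestRegressor(n_estimators=500, random_state=1)
rf.fit(X_train, y_train)
print("test R^2:", rf.score(X_test, y_test))

# relative importance of the color features versus the expert estimate
for name, imp in sorted(zip(features, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:>18s}  {imp:.3f}")
```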
